✨ 3 - Selection 선택

Selection is the choice of those individuals that will participate in creating offspring for the next population, that is, for the next generation.
선택은 다음 세대의 개체군을 만드는 데 참여할 개체들을 고르는 과정, 즉 다음 세대를 위한 개체들을 선택하는 것을 의미합니다.

Such a choice is made by the principle of natural selection, according to which the most adapted individuals have the highest chances of participating in the creation of new individuals.
이러한 선택은 자연 선택의 원리에 따라 이루어지며, 이 원리에 따르면 가장 잘 적응한 개체들이 새로운 개체 생성에 참여할 가능성이 가장 높습니다.

As a result, an intermediate population (or parent pool) appears.
그 결과로 중간 개체군(또는 부모 풀)이 형성됩니다.

An intermediate population is a set of individuals that have acquired the right to breed.
중간 개체군은 번식할 권리를 획득한 개체들의 집합입니다.

Adapted individuals can be recorded there several times.
잘 적응한 개체는 이 중간 개체군에 여러 번 포함될 수 있습니다.

The abandoned individuals will most likely not get there at all.
선택받지 못한 개체들은 대부분 이 집합에 포함되지 못할 것입니다.

NOTE: It is important to understand that the same individual can be selected several times by the selection method, which means it can repeatedly participate in the process of creating new individuals.
참고: 동일한 개체가 선택 방식에 따라 여러 번 선택될 수 있다는 점을 이해하는 것이 중요하며, 이는 그 개체가 새로운 개체를 만드는 과정에 반복적으로 참여할 수 있다는 것을 의미합니다.
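The idea can be sketched with a few lines of Python. The population below is hand-made and the fitness-proportional picking uses the standard random.choices function; it is only an illustration, not the Individual class used later in this chapter.

import random

# Hypothetical population: name -> fitness (illustrative values only)
population = {'A': 3, 'B': 9, 'C': 1, 'D': 6, 'E': 7}
names = list(population)
weights = list(population.values())

# Draw as many parents as there are individuals, with probability proportional
# to fitness. random.choices samples with replacement, so fit individuals may
# appear several times in the parent pool and weak ones may not appear at all.
parent_pool = random.choices(names, weights=weights, k=len(names))
print(parent_pool)  # e.g. ['E', 'B', 'B', 'D', 'E']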


🔡 주요 단어

영어 단어 | 한글 뜻
participate | 참여하다
intermediate population | 중간 개체군
parent pool | 부모 풀
right | 권리
acquire | 획득하다
breed | 번식하다
abandoned | 버려진, 선택되지 않은 개체
repeatedly | 반복적으로

✨ 3 - Structure 구조

In this chapter, we will look at the following selection methods:
이번 장에서는 다음과 같은 선택 방법들을 살펴보겠습니다.

Tournament selection
토너먼트 선택

Proportional selection
비례 선택

Stochastic universal sampling selection
확률적 보편 샘플링 선택

Rank selection
순위 선택

Elite selection
엘리트 선택


✨ 3 - Objectives 학습 목표

Introduce basic selection methods
기본적인 선택 방법들을 소개합니다.

Understand key features of each method
각 방법의 핵심 특징을 이해합니다.

Get practical examples
실용적인 예제들을 살펴봅니다.


✨ 3.1 Tournament selection 토너먼트 선택

Tournament selection is one of the simplest selection methods, and we will start with it.
토너먼트 선택은 가장 간단한 선택 방식 중 하나이며, 이 방법부터 시작하겠습니다.

In tournament selection, a subgroup is selected in a population, and then the best individual in this subgroup is selected.
토너먼트 선택에서는 개체군 내에서 하나의 하위 집단을 선택한 다음, 그중 가장 우수한 개체를 선택합니다.

Typically, the size of a subgroup is 2 or 3 individuals.
일반적으로 하위 집단의 크기는 2개체 또는 3개체입니다.

The tournament selection method can be illustrated by the following script:
토너먼트 선택 방식은 다음과 같은 스크립트로 설명할 수 있습니다.


Import part

import random  
import pandas as pd  
import matplotlib.pyplot as plt  
from ch3.individual import Individual  

Tournament selection

POPULATION_SIZE = 10
TOURNAMENT_SIZE = 3

# Create a random population, draw a random subgroup (with replacement),
# and keep the fittest individual of that subgroup
population = Individual.create_random_population(POPULATION_SIZE)
candidates = [random.choice(population) for _ in range(TOURNAMENT_SIZE)]
best = [max(candidates, key=lambda ind: ind.fitness)]

Visualization

def plot_individuals(individual_set):
    # Bar chart of each individual's fitness, labeled by name
    df = pd.DataFrame({
        'name': [ind.name for ind in individual_set],
        'fitness': [ind.fitness for ind in individual_set]
    })
    df.plot.bar(x='name', y='fitness', rot=0)
    plt.show()

plot_individuals(population)
plot_individuals(candidates)
plot_individuals(best)

Let's take a look at figure 3.1 for a visualization of a random population with its fitness values:
무작위 개체군과 각 개체의 적합도를 시각화한 그림 3.1을 살펴보겠습니다.

image

Say we have the population shown in the preceding figure,
앞서 제시된 그림과 같은 개체군이 있다고 가정해봅시다.

and we randomly choose three individuals from it - A, I, D (refer to the following figure).
그리고 그 개체군에서 무작위로 A, I, D 세 개체를 선택합니다(다음 그림을 참조하세요).

image

As a result, the individual D is chosen.
그 결과로 D 개체가 선택됩니다.

The tournament selection can be implemented in the following way:
토너먼트 선택은 다음과 같이 구현할 수 있습니다:


import random
 
def selection_tournament(individuals, group_size=2):
    selected = []
    for _ in range(len(individuals)):
        candidates = [random.choice(individuals) for _ in range(group_size)]
        selected.append(max(candidates, key=lambda ind: ind.fitness))
    return selected

NOTE: It is worth mentioning that if the group size is two, then the worst individual will never be selected; if the group size is three, then the two worst individuals will never be selected, and so on.
참고: 그룹 크기가 2라면 가장 나쁜 개체는 절대 선택되지 않으며, 그룹 크기가 3이면 가장 나쁜 두 개체는 절대 선택되지 않습니다. 이와 같은 방식으로 계속됩니다.

Let's see this method in action
이제 이 방법을 실제로 실행해 봅시다.


import random
from ch3.selection_tournament import selection_tournament
from ch3.individual import Individual
 
POPULATION_SIZE = 5
random.seed(4)
 
population = Individual.create_random_population(POPULATION_SIZE)
selected = selection_tournament(population, group_size=3)
 
print(f"Population: {population}")
print(f"Selected: {selected}")

Result

Population: [A: 3, B: 4, C: 1, D: 6, E: 7]  
Selected: [B: 4, E: 7, B: 4, E: 7, B: 4]

As expected, the two worst individuals, A and C, were not selected.
예상대로, 가장 낮은 적합도를 가진 두 개체 A와 C는 선택되지 않았습니다.

But we have one more interesting result: the individual D, which has the second-highest fitness score, was also not selected.
하지만 흥미로운 점이 하나 더 있습니다. 두 번째로 높은 적합도를 가진 D 개체도 선택되지 않았다는 것입니다.

You always have to keep in mind that the tournament selection is a random process, and there is no 100% guarantee that the best individual will be selected.
토너먼트 선택은 무작위적인 과정이라는 점을 항상 기억해야 하며, 가장 좋은 개체가 반드시 선택된다는 100% 보장은 없습니다.
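Under the implementation above, where candidates are drawn with replacement, we can roughly estimate how often the best individual is missed altogether. The numbers below are only an illustration for a population of 5 and a group size of 3.

# Probability that the best individual never wins any tournament, assuming
# candidates are drawn with replacement as in selection_tournament above
n, k = 5, 3                               # population size, tournament group size
p_absent_once = (1 - 1 / n) ** k          # best individual not drawn in one tournament
p_never_selected = p_absent_once ** n     # not drawn in any of the n tournaments
print(f"Chance the best individual is never selected: {p_never_selected:.1%}")  # about 3.5%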


🔡 주요 단어

영어 단어 | 한글 뜻
subgroup | 하위 집단
typically | 일반적으로
implement | 구현하다
script | 스크립트 (코드)
candidate | 후보자, 후보 개체
guarantee | 보장

✨ 3.2 Proportional selection 비례 선택

This method can be illustrated with a roulette wheel.
이 방법은 룰렛 휠을 통해 설명할 수 있습니다.

Each individual is assigned a sector of the roulette wheel, the value of which is set proportional to the value of the fitness function of a given individual;
각 개체는 룰렛 휠에서 하나의 구역을 할당받으며, 이 구역의 크기는 해당 개체의 적합도 값에 비례하여 결정됩니다.

therefore, the greater the value of the fitness function, the larger the sector on the roulette wheel.
따라서 적합도 값이 클수록 룰렛에서 차지하는 구역이 더 커집니다.

From this, it follows that the larger the sector on the roulette wheel, the higher the chance that this particular individual will be chosen.
이로 인해 구역이 클수록 해당 개체가 선택될 확률이 높아집니다.

Let's examine the script showing the principle of this selection method
이 선택 방법의 원리를 보여주는 스크립트를 살펴봅시다.


Import part

import random
import pandas as pd
import matplotlib.pyplot as plt
from ch3.individual import Individual

Proportional selection

random.seed(4)
POPULATION_SIZE = 5
 
unsorted_population = Individual.create_random_population(POPULATION_SIZE)
population = sorted(unsorted_population, key=lambda ind: ind.fitness, reverse=True)
fitness_sum = sum([ind.fitness for ind in population])
 
fitness_map = {}
for i in population:
    i_prob = round(100 * i.fitness / fitness_sum)
    i_label = f'{i.name} | fitness: {i.fitness}, prob: {i_prob}%'
    fitness_map[i_label] = i.fitness

Visualization

index = ['Sectors']
df = pd.DataFrame(fitness_map, index=index)
df.plot.barh(stacked=True)
 
for _ in range(POPULATION_SIZE):
    plt.axvline(x=random.random() * fitness_sum, linewidth=5, color='black')
 
plt.tick_params(axis='x', which='both', bottom=False, top=False, labelbottom=False)
plt.show()

image

Consider the preceding example, shown in the figure, where individual B has a fitness score of 9.
앞선 그림에 나타난 것처럼, B 개체의 적합도 점수가 9인 경우를 생각해봅시다.

The sum of all fitness scores is 17, so the sector of individual B occupies 9/17 ≈ 0.53, that is, about 53% of the entire roulette strip.
전체 적합도 합이 17이므로, B 개체는 룰렛 스트립 전체의 9/17 ≈ 0.53, 즉 약 53%에 해당하는 구역을 차지합니다.

The lengths of sectors for other individuals are calculated in the same way.
다른 개체들의 구역 길이도 같은 방식으로 계산됩니다.
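As a small sketch, the sector lengths can be computed directly from the fitness values. The exact values behind the figure are not listed in the text, so the ones below are an assumption chosen to match it (B has fitness 9 and the values sum to 17).

# Assumed fitness values consistent with the figure: B = 9, total = 17
fitness = {'B': 9, 'D': 4, 'A': 2, 'C': 1, 'E': 1}
total = sum(fitness.values())

for name, fit in fitness.items():
    print(f"{name}: {fit}/{total} = {fit / total:.4f} ({fit / total:.0%} of the wheel)")
# B: 9/17 = 0.5294 (53% of the wheel), D: 4/17 = 0.2353 (24%), ...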

Black bars are the result of one roulette spin.
검은 막대들은 룰렛을 한 번 돌린 결과를 나타냅니다.

As a result, we have the following selection result: B, B, B, D, A
그 결과로 선택된 개체들은 다음과 같습니다: B, B, B, D, A

The proportional selection can be implemented in the following way:
비례 선택은 다음과 같이 구현할 수 있습니다:

import random
 
def selection_proportional(individuals):
    sorted_individuals = sorted(individuals, key=lambda ind: ind.fitness, reverse=True)
    fitness_sum = sum([ind.fitness for ind in individuals])
    selected = []
 
    for _ in range(len(sorted_individuals)):
        shave = random.random() * fitness_sum
        roulette_sum = 0
 
        for ind in sorted_individuals:
            roulette_sum += ind.fitness
            if roulette_sum > shave:
                selected.append(ind)
                break
 
    return selected

Let's see this selection method in action
이 선택 방법을 실제로 실행해 봅시다.


import random
from ch3.selection_proportional import selection_proportional
from ch3.individual import Individual
 
POPULATION_SIZE = 5
random.seed(4)
 
population = Individual.create_random_population(POPULATION_SIZE)
selected = selection_proportional(population)
 
print(f"Population: {population}")
print(f"Selected: {selected}")

Result

Population: [A: 3, B: 4, C: 1, D: 6, E: 7]
Selected: [E: 7, E: 7, D: 6, A: 3, B: 4]

The proportional selection method may select the worst individual, and it may also fail to select the best individual.
비례 선택 방식은 성능이 가장 낮은 개체가 선택될 수 있고, 반대로 성능이 가장 좋은 개체가 선택되지 않을 수도 있습니다.
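For the population in the result above ([A: 3, B: 4, C: 1, D: 6, E: 7], fitness sum 21, five spins), both possibilities can be quantified with a rough back-of-the-envelope calculation; the snippet below is only an illustration.

# Rough probabilities for five independent roulette spins over this population
fitness = {'A': 3, 'B': 4, 'C': 1, 'D': 6, 'E': 7}
total = sum(fitness.values())    # 21
spins = len(fitness)             # 5

p_best_missed = (1 - fitness['E'] / total) ** spins    # E (best) never hit
p_worst_hit = 1 - (1 - fitness['C'] / total) ** spins  # C (worst) hit at least once

print(f"P(best E never selected): {p_best_missed:.1%}")         # about 13.2%
print(f"P(worst C selected at least once): {p_worst_hit:.1%}")  # about 21.6%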


🔡 주요 단어

영어 단어 | 한글 뜻
roulette wheel | 룰렛 휠
sector | 구역, 영역
proportional | 비례하는
assign | 할당하다
occupy | 차지하다

✨ 3.3 Stochastic universal sampling selection 확률적 보편 샘플링

The stochastic universal sampling selection method is an alternative to proportional selection.
확률적 보편 샘플링(Stochastic Universal Sampling, SUS)은 비례 선택 방식의 대안입니다.

In this method, the entire roulette wheel is divided into N cutoffs with equal spacing.
이 방식에서는 전체 룰렛 휠이 동일한 간격을 갖는 N개의 구간으로 나뉩니다.

This method smooths out the elements of randomness which proportional selection has,
이 방법은 비례 선택이 가지는 무작위성을 완화시켜주며,

and ensures that the individuals are selected according to the following principle: many good individuals, some average individuals, and a few bad individuals.
좋은 개체는 많이, 평균적인 개체는 적당히, 나쁜 개체는 조금만 선택되도록 보장합니다.


It can be demonstrated as follows:
이 방식은 다음과 같이 살펴볼 수 있습니다:


Import part

import random
import pandas as pd
import matplotlib.pyplot as plt
from ch3.individual import Individual

Selection

POPULATION_SIZE = 5
random.seed(9)
 
unsorted_population = Individual.create_random_population(POPULATION_SIZE)
population = sorted(unsorted_population, key=lambda ind: ind.fitness, reverse=True)
fitness_sum = sum([ind.fitness for ind in population])
 
fitness_map = {}
for i in population:
    i_prob = round(100 * i.fitness / fitness_sum)
    i_label = f'{i.name} | fitness: {i.fitness}, prob: {i_prob}%'
    fitness_map[i_label] = i.fitness

Visualization

index = ['Sectors']
df = pd.DataFrame(fitness_map, index=index)
df.plot.barh(stacked=True)
 
distance = fitness_sum / POPULATION_SIZE
shift = random.random() * distance
 
for i in range(POPULATION_SIZE):
    plt.axvline(x=shift + distance * i, linewidth=5, color='black')
 
plt.tick_params(axis='x', which='both', bottom=False, top=False, labelbottom=False)
plt.show()

Let's take a look at figure 3.4 for a visualization of stochastic universal sampling selection:
확률적 보편 샘플링 선택 방식을 시각적으로 이해하기 위해 그림 3.4를 살펴봅시다.

image

The stochastic universal sampling selection can be implemented in the following way:
확률적 보편 샘플링 선택 방식은 다음과 같이 구현할 수 있습니다:


import random
 
def selection_stochastic_universal_sampling(individuals):
    sorted_individuals = sorted(individuals, key=lambda ind: ind.fitness, reverse=True)
    fitness_sum = sum([ind.fitness for ind in individuals])
    distance = fitness_sum / len(individuals)
    shift = random.uniform(0, distance)
    borders = [shift + i * distance for i in range(len(individuals))]
 
    selected = []
    for border in borders:
        i = 0
        roulette_sum = sorted_individuals[i].fitness
        while roulette_sum < border:
            i += 1
            roulette_sum += sorted_individuals[i].fitness
        selected.append(sorted_individuals[i])
 
    return selected

Let's see the stochastic universal sampling selection method in action
이제 확률적 보편 샘플링 선택 방식을 실제로 실행해 봅시다.


import random
from ch3.selection_stochastic_universal_sampling import selection_stochastic_universal_sampling
from ch3.individual import Individual
 
POPULATION_SIZE = 5
random.seed(1)
 
population = Individual.create_random_population(POPULATION_SIZE)
selected = selection_stochastic_universal_sampling(population)
 
print(f"Population: {population}")
print(f"Selected: {selected}")

Result

Population: [A: 2, B: 9, C: 1, D: 4, E: 1]
Selected: [B: 9, B: 9, B: 9, D: 4, C: 1]
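The pointer spacing explains why B fills three of the five slots. The snippet below sketches the deterministic part of this result for the population above; the random shift itself depends on the seed and is not reproduced here.

# Population sorted by fitness (sum 17); the pointers are spaced 17 / 5 = 3.4 apart
fitness = {'B': 9, 'D': 4, 'A': 2, 'C': 1, 'E': 1}
total = sum(fitness.values())
distance = total / len(fitness)   # 3.4

border = 0
for name, fit in fitness.items():
    print(f"{name} occupies [{border}, {border + fit})")
    border += fit
# B occupies [0, 9): about 9 / 3.4 = 2.6 pointer spacings, so two or three of the
# five equally spaced pointers always land in B's sector, whatever the shift is.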

NOTE: As with the proportional selection method, stochastic universal sampling may select the worst individual and may fail to select the best individual.
참고: 비례 선택 방식과 마찬가지로, 확률적 보편 샘플링도 최악의 개체가 선택될 수 있으며, 최상의 개체가 선택되지 않을 수도 있습니다.

Even if it seems contradictory, this approach shows very good results for a particular class of problems.
모순적으로 보일 수 있지만, 이 방식은 특정 유형의 문제들에서 매우 좋은 성능을 보입니다.


🔡 주요 단어

영어 단어 | 한글 뜻
stochastic | 확률적인
universal | 보편적인
sampling | 샘플링, 표본 추출
alternative | 대안
cutoff | 경계선, 잘라지는 지점
smooth out | 완화하다, 부드럽게 하다
randomness | 무작위성
ensure | 보장하다
sector | 구역, 영역
border | 경계 위치
contradictory | 모순적인

✨ 3.4 Rank selection 순위 선택

Rank selection has the same principle as proportional selection, but individuals of the population are ranked according to the values of their fitness function.
순위 선택(Rank Selection)은 비례 선택과 같은 원리를 따르지만, 개체들을 적합도 값에 따라 순위로 정렬한다는 점에서 차이가 있습니다.

This can be thought of as a sorted list of individuals, ordered from the fittest to the least fit, in which each individual is assigned a number that determines its place in the list, called rank.
이 방식은 개체들을 가장 적합한 순서부터 덜 적합한 순서대로 정렬한 리스트로 생각할 수 있으며, 각 개체는 리스트에서의 위치를 나타내는 "순위(rank)"를 부여받습니다.

Rank selection smooths out the large difference between individuals with high fitness values and individuals with low fitness values.
순위 선택은 높은 적합도를 가진 개체와 낮은 적합도를 가진 개체 사이의 큰 차이를 완화해주는 효과가 있습니다.

Let's compare proportional selection with rank selection
비례 선택과 순위 선택을 비교해 봅시다.


Import part

import random
import pandas as pd
import matplotlib.pyplot as plt
from ch3.individual import Individual

Rank selection

POPULATION_SIZE = 5
random.seed(2)
 
unsorted_population = Individual.create_random_population(POPULATION_SIZE)
population = sorted(unsorted_population, key=lambda ind: ind.fitness, reverse=True)
fitness_sum = sum([ind.fitness for ind in population])
 
fitness_map = {}
for i in population:
    i_prob = round(100 * i.fitness / fitness_sum)
    i_label = f'{i.name} | fitness: {i.fitness}, prob: {i_prob}%'
    fitness_map[i_label] = i.fitness
 
proportional_df = pd.DataFrame(fitness_map, index=['Sectors'])
proportional_df.plot.barh(stacked=True)
plt.tick_params(axis='x', which='both', bottom=False, top=False, labelbottom=False)
plt.title('Fitness Proportional Sectors')
plt.show()

Rank-based probabilities and visualization

rank_step = 1 / POPULATION_SIZE
rank_sum = sum([1 - rank_step * i for i in range(len(population))])
 
rank_map = {}
for i in range(len(population)):
    i_rank = i + 1
    i_rank_proportion = 1 - i * rank_step
    i_prob = round(100 * i_rank_proportion / rank_sum)
    i_label = f'{population[i].name} | rank: {i_rank}, prob: {i_prob}%'
    rank_map[i_label] = i_rank_proportion
 
rank_df = pd.DataFrame(rank_map, index=['Sectors'])
rank_df.plot.barh(stacked=True)
plt.tick_params(axis='x', which='both', bottom=False, top=False, labelbottom=False)
plt.title('Rank Proportional Sectors')
plt.show()

We already saw that proportional selection uses the roulette sectors, as shown in the following figure.
우리는 이미 비례 선택이 다음 그림과 같이 룰렛 구역을 사용한다는 것을 살펴보았습니다.

image

But for the same population, rank selection will use the following roulette (as shown in the following figure):
하지만 동일한 개체군에 대해서, 순위 선택은 다음 그림과 같은 룰렛을 사용합니다.

image

We see that the best individual in rank selection has a lower chance of being selected than it has in the proportional selection,
순위 선택에서는 가장 우수한 개체가 비례 선택에서보다 선택될 확률이 더 낮습니다.

and on the contrary, the worst individual, which had no chance of being selected in proportional selection, has some positive probability of being selected.
반대로, 비례 선택에서는 선택될 가능성이 없었던 최악의 개체도 순위 선택에서는 일정 확률로 선택될 수 있습니다.


How is rank selection calculated?
순위 선택 확률은 어떻게 계산될까요?


rank_shift := 1 / population_size = 1 / 5 = 0.2
rank_weight_sum := (population_size + 1) / 2 = 3

Nth_individual_weight := (1 - (rank - 1) × rank_shift) / rank_weight_sum × 100%

For D we have (1 - (0 × 0.2)) / 3 × 100% = 33%
D의 경우: (1 - (0 × 0.2)) / 3 × 100% = 33%

For E we have (1 - (1 × 0.2)) / 3 × 100% = 27%
E의 경우: (1 - (1 × 0.2)) / 3 × 100% = 27%

For B we have (1 - (2 × 0.2)) / 3 × 100% = 20%
B의 경우: (1 - (2 × 0.2)) / 3 × 100% = 20%
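The same percentages can be reproduced with a few lines of code; this is just a sanity check of the formula above for a population of five individuals.

# Rank-based selection probabilities for a population of 5
population_size = 5
rank_shift = 1 / population_size                                # 0.2
weights = [1 - i * rank_shift for i in range(population_size)]  # 1.0, 0.8, 0.6, 0.4, 0.2
weight_sum = sum(weights)                                       # 3.0

for rank, weight in enumerate(weights, start=1):
    print(f"rank {rank}: {round(100 * weight / weight_sum)}%")
# rank 1: 33%, rank 2: 27%, rank 3: 20%, rank 4: 13%, rank 5: 7%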


Rank selection implementation

import random
 
def selection_rank(individuals):
    sorted_individuals = sorted(individuals, key=lambda ind: ind.fitness, reverse=True)
    rank_distance = 1 / len(individuals)
    ranks = [(1 - i * rank_distance) for i in range(len(individuals))]
    ranks_sum = sum(ranks)
 
    selected = []
    for _ in range(len(sorted_individuals)):
        shave = random.random() * ranks_sum
        rank_sum = 0
        for i in range(len(sorted_individuals)):
            rank_sum += ranks[i]
            if rank_sum > shave:
                selected.append(sorted_individuals[i])
                break
 
    return selected

Let's examine how rank selection works
순위 선택이 어떻게 작동하는지 살펴봅시다.


import random
from ch3.selection_rank import selection_rank
from ch3.individual import Individual
 
POPULATION_SIZE = 5
random.seed(18)
 
population = Individual.create_random_population(POPULATION_SIZE)
selected = selection_rank(population)
 
print(f'Population: {population}')
print(f'Selected: {selected}')

Result

Population: [A: 2, B: 1, C: 10, D: 7, E: 5]
Selected: [C: 10, B: 1, E: 5, C: 10, C: 10]

🔡 주요 단어

영어 단어 | 한글 뜻
smoothen (smooth out) | 완화하다, 부드럽게 만들다
comparison | 비교
weight | 가중치
probability formula | 확률 공식
positive probability | 0보다 큰 확률

✨ 3.5 Elite selection 엘리트 선택

As we have already seen, none of the selection methods that we have considered - tournament, proportional, stochastic universal sampling, and rank selection - guarantee the selection of the best individual.
지금까지 살펴본 토너먼트, 비례, 확률적 보편 샘플링, 순위 선택 방식 중 어느 것도 최상의 개체가 반드시 선택된다는 보장을 해주지 않습니다.

The genes of the best individual can be very valuable for the next generations, so there is an approach that protects the best individuals.
가장 우수한 개체의 유전자는 다음 세대에 매우 유용할 수 있으므로, 이들을 보호하는 접근 방식이 존재합니다.

This method is called elite selection.
이 방법을 엘리트 선택(Elite Selection)이라고 합니다.

Elite selection can be based on another method, such as rank selection, but the main change in this method is the guaranteed inclusion of the best individuals in the selected population.
엘리트 선택은 순위 선택 같은 다른 선택 방식에 기반할 수 있지만, 핵심적인 차이는 가장 우수한 개체들을 반드시 다음 세대에 포함시킨다는 점입니다.


Elite selection implementation

import random

def selection_rank_with_elite(individuals, elite_size=0):
    sorted_individuals = sorted(individuals, key=lambda ind: ind.fitness, reverse=True)
    rank_distance = 1 / len(individuals)
    ranks = [(1 - i * rank_distance) for i in range(len(individuals))]
    ranks_sum = sum(ranks)

    # The elite_size best individuals are included unconditionally
    selected = sorted_individuals[0:elite_size]

    # The remaining slots are filled by ordinary rank selection
    for _ in range(len(sorted_individuals) - elite_size):
        shave = random.random() * ranks_sum
        rank_sum = 0
        for i in range(len(sorted_individuals)):
            rank_sum += ranks[i]
            if rank_sum > shave:
                selected.append(sorted_individuals[i])
                break

    return selected

Let's see this method in action
이제 이 방법을 실제로 실행해 봅시다.


import random
from ch3.selection_rank_with_elite import selection_rank_with_elite
from ch3.individual import Individual
 
POPULATION_SIZE = 5
random.seed(3)
 
population = Individual.create_random_population(POPULATION_SIZE)
selected = selection_rank_with_elite(population, elite_size=2)
 
print(f"Population: {population}")
print(f"Selected: {selected}")

Result

Population: [A: 3, B: 9, C: 8, D: 2, E: 5]
Selected: [B: 9, C: 8, A: 3, C: 8, C: 8]

As we can see, B and C are the two best individuals in the population; they form the elite and are selected by default.
위 결과에서 볼 수 있듯이, B와 C는 개체군에서 가장 우수한 두 개체이며, 엘리트로 간주되어 기본적으로 선택됩니다.

NOTE: Elite selection is a handy method of selection in conditions where an individual's fitness may degenerate as a result of crossover or mutation.
참고: 엘리트 선택은 교차나 돌연변이로 인해 개체의 적합도가 떨어질 수 있는 상황에서 유용한 선택 방식입니다.

We need to protect the best individuals, and try to spread their genes among the population.
우리는 최상의 개체를 보호하고, 그들의 유전자를 개체군에 널리 퍼뜨려야 합니다.


🔡 주요 단어

영어 단어 | 한글 뜻
guarantee | 보장하다
inclusion | 포함
protect | 보호하다
valuable | 가치 있는
by default | 기본적으로, 자동으로
degenerate | 퇴화하다, 약화되다
handy | 유용한

✨ 3 - Conclusion 결론

Selection is a very important part of the evolution process; every individual aims to produce offspring.
선택은 진화 과정에서 매우 중요한 부분이며, 모든 개체는 자손을 남기는 것을 목표로 합니다.

The selection process is random by nature.
선택 과정은 본질적으로 무작위적입니다.

We have studied several selection methods, each of which has its pros and cons.
우리는 여러 가지 선택 방법들을 살펴보았고, 각각의 방식은 장단점을 가지고 있습니다.

You can use one of these methods or any modification.
이 중 하나의 방법을 그대로 사용할 수도 있고, 수정된 방식을 사용할 수도 있습니다.

In the next chapter, we will study the next part of evolution called crossover.
다음 장에서는 진화의 다음 단계인 교차(crossover)에 대해 배워보겠습니다.


✨ 3 - Points to remember: 기억할 점

Each selection method has the following principle: adapted individuals have a higher probability of being selected than the abandoned ones.
각 선택 방식은 다음 원칙을 따릅니다: 더 적응된 개체가 그렇지 못한 개체보다 선택될 가능성이 더 높습니다.

Even abandoned individuals can have something valuable in their genes, so we have to leave a positive probability for them to be selected.
하지만 덜 적합한 개체들도 유전적으로 가치 있는 정보를 가질 수 있기 때문에, 그들이 선택될 수 있는 일정한 확률은 반드시 남겨두어야 합니다.


✨ 3 - Multiple choice questions 객관식 문제

1. Which selection method guarantees that the best individual will be selected?
어떤 선택 방식이 최상의 개체가 선택될 것을 보장하나요?

  • Rank selection
    순위 선택
  • Elite selection ✅
    엘리트 선택 ✅
  • Tournament selection
    토너먼트 선택
  • Proportional selection
    비례 선택

2. Which selection method guarantees that the worst individual will not be selected?
어떤 선택 방식이 최악의 개체가 선택되지 않을 것을 보장하나요?

  • Rank selection
    순위 선택
  • Elite selection
    엘리트 선택
  • Tournament selection ✅
    토너먼트 선택 ✅
  • Proportional selection
    비례 선택

3. Say we have the following population: A: 3, B: 9, C: 8, D: 2, E: 5.
다음과 같은 개체군이 있다고 가정합시다: A: 3, B: 9, C: 8, D: 2, E: 5

And we have the selected population: B: 9, C: 8, A: 3, C: 8, C: 8.
선택된 개체군이 다음과 같다면: B: 9, C: 8, A: 3, C: 8, C: 8

What selection method have we used?
어떤 선택 방식이 사용되었을까요?

  • Elite selection
    엘리트 선택

  • Proportional selection
    비례 선택

  • It's impossible to answer this question; it can be any of the selection methods ✅
    이 질문에는 답할 수 없습니다. 어떤 방식이든 가능성이 있습니다. ✅

  • Elite selection is most likely, since the top individuals B and C are always included
    최상위 개체 B와 C가 항상 포함되어 있는 것으로 보아 엘리트 선택일 가능성이 가장 큼