Basic Neural Network Implementation

Author

Deri Siswara

1. Setup and Library Imports

Let's import the libraries required for the neural network implementation.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from tensorflow import keras
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

# Set random seeds for reproducibility
np.random.seed(42)
tf.random.set_seed(42)

print(f"TensorFlow version: {tf.__version__}")
TensorFlow version: 2.15.0

2. Generating Simple Data for Experiments

We will create simple datasets to demonstrate neural networks. For the perceptron we generate linearly separable data; for the multi-layer neural network we use more complex, non-linearly separable data.

# Data for the perceptron (linearly separable)
def generate_linear_data(n_samples=100):
    # Generate random points
    X = np.random.randn(n_samples, 2)
    # Class 1 where x1 + x2 > 0, i.e. the boundary is the line x2 = -x1
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X, y

# Data for the multi-layer NN (not linearly separable)
def generate_circular_data(n_samples=100):
    # Generate random points
    X = np.random.randn(n_samples, 2)
    # Circular boundary: points inside the circle x1^2 + x2^2 = 2 are class 1
    y = (X[:, 0]**2 + X[:, 1]**2 < 2).astype(int)
    return X, y

# Generate data
X_linear, y_linear = generate_linear_data(200)
X_circular, y_circular = generate_circular_data(200)

# Visualize the data
plt.figure(figsize=(12, 5))

plt.subplot(1, 2, 1)
plt.scatter(X_linear[:, 0], X_linear[:, 1], c=y_linear, cmap='viridis', alpha=0.8)
plt.title('Linear Data (for the Perceptron)')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.grid(True)

plt.subplot(1, 2, 2)
plt.scatter(X_circular[:, 0], X_circular[:, 1], c=y_circular, cmap='viridis', alpha=0.8)
plt.title('Non-linear Data (for the Multi-layer NN)')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.grid(True)

plt.tight_layout()
plt.show()

3. Implementing a Simple Perceptron

The perceptron is the simplest neural network: a single neuron. We implement it with TensorFlow/Keras for the linearly separable classification data. (Strictly speaking, training a single sigmoid neuron with binary cross-entropy is logistic regression; the classic perceptron uses a step activation and a different update rule, but the single-neuron architecture is the same.)
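For intuition, the classic perceptron learning rule can be sketched in plain NumPy before the Keras version. This is a minimal illustration (the `perceptron_train` helper is written for this sketch, not part of any library):

```python
import numpy as np

def perceptron_train(X, y, lr=0.1, epochs=20):
    """Classic perceptron: step activation, update weights only on mistakes."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = int(w @ xi + b > 0)   # step activation
            err = yi - pred              # 0 if correct, +/-1 if misclassified
            w += lr * err * xi           # move the boundary toward the mistake
            b += lr * err
    return w, b

# Tiny linearly separable toy set: class 1 where x1 + x2 > 0
X_demo = np.array([[1.0, 1.0], [2.0, 1.0], [-1.0, -1.0], [-2.0, -0.5]])
y_demo = np.array([1, 1, 0, 0])
w, b = perceptron_train(X_demo, y_demo)
preds = (X_demo @ w + b > 0).astype(int)
print(preds)  # [1 1 0 0]
```

On linearly separable data like this, the perceptron convergence theorem guarantees the rule finds a separating line after finitely many updates.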

# Split data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X_linear, y_linear, test_size=0.2, random_state=42)

# Implementasi Perceptron dengan TensorFlow/Keras
perceptron_model = keras.Sequential([
    # A perceptron has a single trainable layer mapping inputs to output
    # Input: 2 features; output: 1 neuron with sigmoid activation
    keras.layers.Dense(1, activation='sigmoid', input_shape=(2,))
])

# Compile model
perceptron_model.compile(
    optimizer='sgd',
    loss='binary_crossentropy',
    metrics=['accuracy']
)

# Model summary
perceptron_model.summary()

# Train the model
history_perceptron = perceptron_model.fit(
    X_train, y_train,
    epochs=50,
    batch_size=16,
    validation_split=0.2,
    verbose=1
)

# Evaluate the model
perceptron_loss, perceptron_accuracy = perceptron_model.evaluate(X_test, y_test)
print(f"\nPerceptron Test Accuracy: {perceptron_accuracy:.4f}")
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense (Dense)               (None, 1)                 3         
                                                                 
=================================================================
Total params: 3 (12.00 Byte)
Trainable params: 3 (12.00 Byte)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Epoch 1/50
8/8 [==============================] - 0s 13ms/step - loss: 0.8835 - accuracy: 0.4062 - val_loss: 0.9608 - val_accuracy: 0.2188
...
Epoch 50/50
8/8 [==============================] - 0s 4ms/step - loss: 0.4561 - accuracy: 0.8281 - val_loss: 0.4971 - val_accuracy: 0.8438
2/2 [==============================] - 0s 2ms/step - loss: 0.4554 - accuracy: 0.8500

Perceptron Test Accuracy: 0.8500
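The trained perceptron defines a linear boundary w1*x1 + w2*x2 + b = 0. In Keras the learned parameters can be read with `perceptron_model.layers[0].get_weights()`; the sketch below uses hypothetical values of the same shapes to show how the boundary line follows from them:

```python
import numpy as np

# Hypothetical values, shaped like perceptron_model.layers[0].get_weights()
W = np.array([[1.8], [1.7]])   # kernel, shape (2, 1): one weight per feature
b = np.array([0.05])           # bias, shape (1,)

# The sigmoid output crosses 0.5 exactly where the logit is zero:
#   w1*x1 + w2*x2 + b = 0  =>  x2 = -(w1*x1 + b) / w2
x1 = np.linspace(-3, 3, 7)
x2_boundary = -(W[0, 0] * x1 + b[0]) / W[1, 0]

# A point above the boundary line gets probability > 0.5 (class 1)
logit = np.array([0.0, 1.0]) @ W[:, 0] + b[0]
prob = 1.0 / (1.0 + np.exp(-logit))
print(prob > 0.5)  # True
```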
# Visualize the training history
plt.figure(figsize=(12, 5))

# Plot loss over epochs
plt.subplot(1, 2, 1)
plt.plot(history_perceptron.history['loss'], label='Training Loss')
plt.plot(history_perceptron.history['val_loss'], label='Validation Loss')
plt.title('Perceptron Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.grid(True)

# Plot accuracy over epochs
plt.subplot(1, 2, 2)
plt.plot(history_perceptron.history['accuracy'], label='Training Accuracy')
plt.plot(history_perceptron.history['val_accuracy'], label='Validation Accuracy')
plt.title('Perceptron Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.grid(True)

plt.tight_layout()
plt.show()

# Visualize the perceptron's decision boundary
def plot_decision_boundary(model, X, y):
    # Set min and max values for both features
    x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    
    # Create a mesh grid
    xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.01),
                         np.arange(y_min, y_max, 0.01))
    
    # Flatten the mesh grid to pass to model
    mesh_points = np.c_[xx.ravel(), yy.ravel()]
    
    # Predict with the model (verbose=0 suppresses the per-batch progress bar)
    Z = model.predict(mesh_points, verbose=0)
    Z = Z.reshape(xx.shape)
    
    # Plot the contour and training examples
    plt.figure(figsize=(10, 8))
    plt.contourf(xx, yy, Z, alpha=0.8, cmap='viridis')
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap='viridis', edgecolors='k', alpha=0.8)
    plt.xlabel('Feature 1')
    plt.ylabel('Feature 2')
    plt.title('Decision Boundary')
    plt.grid(True)
    plt.colorbar()
    plt.show()

# Plot decision boundary for perceptron
plot_decision_boundary(perceptron_model, X_linear, y_linear)

4. Implementing a Multi-layer Neural Network

Now we implement a neural network with multiple layers to handle data that is not linearly separable.
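The forward pass of the 2-8-4-1 architecture built below reduces to matrix products with nonlinearities between them. A NumPy sketch with random, untrained weights, just to show the mechanics and the layer shapes:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)
X = rng.normal(size=(5, 2))            # 5 samples, 2 features

# Random, untrained parameters matching the Keras layer shapes below
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 4)), np.zeros(4)
W3, b3 = rng.normal(size=(4, 1)), np.zeros(1)

h1 = relu(X @ W1 + b1)                 # first hidden layer:  (5, 8)
h2 = relu(h1 @ W2 + b2)                # second hidden layer: (5, 4)
probs = sigmoid(h2 @ W3 + b3)          # output probabilities: (5, 1)
print(probs.shape)                     # (5, 1)
```

The ReLU nonlinearities between the matrix products are what let the network carve out a non-linear (here, roughly circular) decision boundary; without them the stack would collapse to a single linear map.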

# Split data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X_circular, y_circular, test_size=0.2, random_state=42)

# Multi-layer NN with TensorFlow/Keras
multilayer_model = keras.Sequential([
    # First hidden layer: 8 ReLU neurons over the 2 input features
    keras.layers.Dense(8, activation='relu', input_shape=(2,)),
    
    # Second hidden layer: 4 ReLU neurons
    keras.layers.Dense(4, activation='relu'),
    
    # Output layer: 1 sigmoid neuron for binary classification
    keras.layers.Dense(1, activation='sigmoid')
])

# Compile model
multilayer_model.compile(
    optimizer='adam',
    loss='binary_crossentropy',
    metrics=['accuracy']
)

# Model summary
multilayer_model.summary()

# Train the model
history_multilayer = multilayer_model.fit(
    X_train, y_train,
    epochs=100,
    batch_size=16,
    validation_split=0.2,
    verbose=1
)

# Evaluate the model
multilayer_loss, multilayer_accuracy = multilayer_model.evaluate(X_test, y_test)
print(f"\nMulti-layer NN Test Accuracy: {multilayer_accuracy:.4f}")
Model: "sequential_1"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense_1 (Dense)             (None, 8)                 24        
                                                                 
 dense_2 (Dense)             (None, 4)                 36        
                                                                 
 dense_3 (Dense)             (None, 1)                 5         
                                                                 
=================================================================
Total params: 65 (260.00 Byte)
Trainable params: 65 (260.00 Byte)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Epoch 1/100
8/8 [==============================] - 0s 12ms/step - loss: 0.7552 - accuracy: 0.3906 - val_loss: 0.6717 - val_accuracy: 0.5938
...
Epoch 100/100
8/8 [==============================] - 0s 3ms/step - loss: 0.2517 - accuracy: 0.9688 - val_loss: 0.3351 - val_accuracy: 0.8750
2/2 [==============================] - 0s 3ms/step - loss: 0.2602 - accuracy: 0.9250

Multi-layer NN Test Accuracy: 0.9250
# Visualize the training history
plt.figure(figsize=(12, 5))

# Plot loss over epochs
plt.subplot(1, 2, 1)
plt.plot(history_multilayer.history['loss'], label='Training Loss')
plt.plot(history_multilayer.history['val_loss'], label='Validation Loss')
plt.title('Multi-layer NN Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.grid(True)

# Plot accuracy over epochs
plt.subplot(1, 2, 2)
plt.plot(history_multilayer.history['accuracy'], label='Training Accuracy')
plt.plot(history_multilayer.history['val_accuracy'], label='Validation Accuracy')
plt.title('Multi-layer NN Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.grid(True)

plt.tight_layout()
plt.show()

# Plot the decision boundary for the multi-layer NN
plot_decision_boundary(multilayer_model, X_circular, y_circular)

5. Implementing a Neural Network on Tabular Data

Now we will use a more realistic tabular dataset: the well-known Iris dataset.

# Load the Iris dataset (as an example of tabular data)
from sklearn.datasets import load_iris

iris = load_iris()
X = iris.data
y = iris.target

# Convert to a DataFrame for visualization
iris_df = pd.DataFrame(X, columns=iris.feature_names)
iris_df['target'] = y
iris_df['species'] = iris_df['target'].map({
    0: 'setosa', 
    1: 'versicolor', 
    2: 'virginica'
})

# Inspect the dataset
print("Dataset Shape:", iris_df.shape)
print("\nSample data:")
print(iris_df.head())

# Descriptive statistics
print("\nDescriptive Statistics:")
print(iris_df.describe())

# Visualize the class distribution
plt.figure(figsize=(10, 6))
sns.countplot(x='species', data=iris_df)
plt.title('Class Distribution of the Iris Dataset')
plt.show()

# Pairwise feature relationships (pairplot creates its own figure;
# a preceding plt.figure() call would only produce an empty extra figure)
sns.pairplot(iris_df, hue='species', height=2.5)
plt.suptitle('Pairplot of the Iris Dataset', y=1.02)
plt.show()
Dataset Shape: (150, 6)

Sample data:
   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)  \
0                5.1               3.5                1.4               0.2   
1                4.9               3.0                1.4               0.2   
2                4.7               3.2                1.3               0.2   
3                4.6               3.1                1.5               0.2   
4                5.0               3.6                1.4               0.2   

   target species  
0       0  setosa  
1       0  setosa  
2       0  setosa  
3       0  setosa  
4       0  setosa  

Descriptive Statistics:
       sepal length (cm)  sepal width (cm)  petal length (cm)  \
count         150.000000        150.000000         150.000000   
mean            5.843333          3.057333           3.758000   
std             0.828066          0.435866           1.765298   
min             4.300000          2.000000           1.000000   
25%             5.100000          2.800000           1.600000   
50%             5.800000          3.000000           4.350000   
75%             6.400000          3.300000           5.100000   
max             7.900000          4.400000           6.900000   

       petal width (cm)      target  
count        150.000000  150.000000  
mean           1.199333    1.000000  
std            0.762238    0.819232  
min            0.100000    0.000000  
25%            0.300000    0.000000  
50%            1.300000    1.000000  
75%            1.800000    2.000000  
max            2.500000    2.000000  


# Data preprocessing
X = iris_df.drop(['species', 'target'], axis=1).values
y = iris_df['target'].values

# One-hot encode the target (multi-class classification)
y_onehot = keras.utils.to_categorical(y)

# Split data
X_train, X_test, y_train, y_test = train_test_split(X, y_onehot, test_size=0.2, random_state=42)

# Standardize the features (fit on the training split only to avoid leakage)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Neural network for tabular data
tabular_model = keras.Sequential([
    # First hidden layer: 10 ReLU neurons; input_shape declares the 4 Iris features
    keras.layers.Dense(10, activation='relu', input_shape=(X.shape[1],)),
    
    # Second hidden layer: 8 ReLU neurons
    keras.layers.Dense(8, activation='relu'),
    
    # Output layer: 3 softmax neurons for multi-class classification (3 Iris classes)
    keras.layers.Dense(3, activation='softmax')
])

# Compile the model
tabular_model.compile(
    optimizer='adam',
    loss='categorical_crossentropy',
    metrics=['accuracy']
)

# Model summary
tabular_model.summary()

# Train the model
history_tabular = tabular_model.fit(
    X_train, y_train,
    epochs=100,
    batch_size=16,
    validation_split=0.2,
    verbose=1
)

# Evaluate the model
tabular_loss, tabular_accuracy = tabular_model.evaluate(X_test, y_test)
print(f"\nTabular NN Test Accuracy: {tabular_accuracy:.4f}")
Model: "sequential_2"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense_4 (Dense)             (None, 10)                50        
                                                                 
 dense_5 (Dense)             (None, 8)                 88        
                                                                 
 dense_6 (Dense)             (None, 3)                 27        
                                                                 
=================================================================
Total params: 165 (660.00 Byte)
Trainable params: 165 (660.00 Byte)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Epoch 1/100
6/6 [==============================] - 0s 18ms/step - loss: 1.1232 - accuracy: 0.2812 - val_loss: 0.9800 - val_accuracy: 0.5000
Epoch 2/100
6/6 [==============================] - 0s 4ms/step - loss: 1.1018 - accuracy: 0.2812 - val_loss: 0.9673 - val_accuracy: 0.5000
Epoch 3/100
6/6 [==============================] - 0s 4ms/step - loss: 1.0817 - accuracy: 0.2812 - val_loss: 0.9539 - val_accuracy: 0.5000
Epoch 4/100
6/6 [==============================] - 0s 4ms/step - loss: 1.0620 - accuracy: 0.2917 - val_loss: 0.9396 - val_accuracy: 0.5000
Epoch 5/100
6/6 [==============================] - 0s 4ms/step - loss: 1.0431 - accuracy: 0.3021 - val_loss: 0.9258 - val_accuracy: 0.5417
Epoch 6/100
6/6 [==============================] - 0s 4ms/step - loss: 1.0232 - accuracy: 0.3333 - val_loss: 0.9124 - val_accuracy: 0.5833
Epoch 7/100
6/6 [==============================] - 0s 4ms/step - loss: 1.0038 - accuracy: 0.4062 - val_loss: 0.8988 - val_accuracy: 0.6250
Epoch 8/100
6/6 [==============================] - 0s 4ms/step - loss: 0.9836 - accuracy: 0.5104 - val_loss: 0.8850 - val_accuracy: 0.7083
Epoch 9/100
6/6 [==============================] - 0s 4ms/step - loss: 0.9637 - accuracy: 0.6250 - val_loss: 0.8705 - val_accuracy: 0.7083
Epoch 10/100
6/6 [==============================] - 0s 4ms/step - loss: 0.9450 - accuracy: 0.6562 - val_loss: 0.8562 - val_accuracy: 0.7083
Epoch 11/100
6/6 [==============================] - 0s 4ms/step - loss: 0.9246 - accuracy: 0.6562 - val_loss: 0.8425 - val_accuracy: 0.7083
Epoch 12/100
6/6 [==============================] - 0s 4ms/step - loss: 0.9055 - accuracy: 0.6562 - val_loss: 0.8276 - val_accuracy: 0.7083
Epoch 13/100
6/6 [==============================] - 0s 4ms/step - loss: 0.8858 - accuracy: 0.6562 - val_loss: 0.8117 - val_accuracy: 0.7083
Epoch 14/100
6/6 [==============================] - 0s 4ms/step - loss: 0.8664 - accuracy: 0.6667 - val_loss: 0.7963 - val_accuracy: 0.7083
Epoch 15/100
6/6 [==============================] - 0s 4ms/step - loss: 0.8465 - accuracy: 0.6667 - val_loss: 0.7812 - val_accuracy: 0.7083
Epoch 16/100
6/6 [==============================] - 0s 4ms/step - loss: 0.8268 - accuracy: 0.6667 - val_loss: 0.7668 - val_accuracy: 0.7083
Epoch 17/100
6/6 [==============================] - 0s 4ms/step - loss: 0.8071 - accuracy: 0.6667 - val_loss: 0.7515 - val_accuracy: 0.7083
Epoch 18/100
6/6 [==============================] - 0s 4ms/step - loss: 0.7885 - accuracy: 0.6667 - val_loss: 0.7364 - val_accuracy: 0.7083
Epoch 19/100
6/6 [==============================] - 0s 4ms/step - loss: 0.7682 - accuracy: 0.6875 - val_loss: 0.7216 - val_accuracy: 0.7083
Epoch 20/100
6/6 [==============================] - 0s 4ms/step - loss: 0.7500 - accuracy: 0.6875 - val_loss: 0.7075 - val_accuracy: 0.7083
Epoch 21/100
6/6 [==============================] - 0s 4ms/step - loss: 0.7315 - accuracy: 0.6875 - val_loss: 0.6929 - val_accuracy: 0.7500
Epoch 22/100
6/6 [==============================] - 0s 5ms/step - loss: 0.7132 - accuracy: 0.6875 - val_loss: 0.6790 - val_accuracy: 0.7500
Epoch 23/100
6/6 [==============================] - 0s 5ms/step - loss: 0.6960 - accuracy: 0.6979 - val_loss: 0.6650 - val_accuracy: 0.7500
Epoch 24/100
6/6 [==============================] - 0s 4ms/step - loss: 0.6779 - accuracy: 0.7083 - val_loss: 0.6510 - val_accuracy: 0.7500
Epoch 25/100
6/6 [==============================] - 0s 4ms/step - loss: 0.6601 - accuracy: 0.7188 - val_loss: 0.6383 - val_accuracy: 0.7500
Epoch 26/100
6/6 [==============================] - 0s 4ms/step - loss: 0.6439 - accuracy: 0.7188 - val_loss: 0.6246 - val_accuracy: 0.7500
Epoch 27/100
6/6 [==============================] - 0s 4ms/step - loss: 0.6274 - accuracy: 0.7292 - val_loss: 0.6120 - val_accuracy: 0.7917
Epoch 28/100
6/6 [==============================] - 0s 4ms/step - loss: 0.6119 - accuracy: 0.7396 - val_loss: 0.5988 - val_accuracy: 0.7917
Epoch 29/100
6/6 [==============================] - 0s 4ms/step - loss: 0.5959 - accuracy: 0.7396 - val_loss: 0.5854 - val_accuracy: 0.7917
Epoch 30/100
6/6 [==============================] - 0s 4ms/step - loss: 0.5798 - accuracy: 0.7500 - val_loss: 0.5718 - val_accuracy: 0.7917
Epoch 31/100
6/6 [==============================] - 0s 4ms/step - loss: 0.5640 - accuracy: 0.7500 - val_loss: 0.5578 - val_accuracy: 0.7917
Epoch 32/100
6/6 [==============================] - 0s 4ms/step - loss: 0.5487 - accuracy: 0.7604 - val_loss: 0.5439 - val_accuracy: 0.7917
Epoch 33/100
6/6 [==============================] - 0s 4ms/step - loss: 0.5335 - accuracy: 0.7812 - val_loss: 0.5306 - val_accuracy: 0.7917
Epoch 34/100
6/6 [==============================] - 0s 4ms/step - loss: 0.5193 - accuracy: 0.7812 - val_loss: 0.5187 - val_accuracy: 0.7917
Epoch 35/100
6/6 [==============================] - 0s 4ms/step - loss: 0.5063 - accuracy: 0.7917 - val_loss: 0.5087 - val_accuracy: 0.7917
Epoch 36/100
6/6 [==============================] - 0s 4ms/step - loss: 0.4939 - accuracy: 0.7917 - val_loss: 0.4991 - val_accuracy: 0.7917
Epoch 37/100
6/6 [==============================] - 0s 4ms/step - loss: 0.4824 - accuracy: 0.7917 - val_loss: 0.4900 - val_accuracy: 0.7917
Epoch 38/100
6/6 [==============================] - 0s 4ms/step - loss: 0.4719 - accuracy: 0.8021 - val_loss: 0.4806 - val_accuracy: 0.7917
Epoch 39/100
6/6 [==============================] - 0s 4ms/step - loss: 0.4613 - accuracy: 0.8229 - val_loss: 0.4725 - val_accuracy: 0.8333
Epoch 40/100
6/6 [==============================] - 0s 4ms/step - loss: 0.4517 - accuracy: 0.8229 - val_loss: 0.4658 - val_accuracy: 0.7917
Epoch 41/100
6/6 [==============================] - 0s 4ms/step - loss: 0.4420 - accuracy: 0.8229 - val_loss: 0.4582 - val_accuracy: 0.7917
Epoch 42/100
6/6 [==============================] - 0s 4ms/step - loss: 0.4332 - accuracy: 0.8229 - val_loss: 0.4503 - val_accuracy: 0.7917
Epoch 43/100
6/6 [==============================] - 0s 4ms/step - loss: 0.4249 - accuracy: 0.8229 - val_loss: 0.4429 - val_accuracy: 0.8333
Epoch 44/100
6/6 [==============================] - 0s 4ms/step - loss: 0.4167 - accuracy: 0.8333 - val_loss: 0.4371 - val_accuracy: 0.8333
Epoch 45/100
6/6 [==============================] - 0s 5ms/step - loss: 0.4089 - accuracy: 0.8438 - val_loss: 0.4298 - val_accuracy: 0.8333
Epoch 46/100
6/6 [==============================] - 0s 5ms/step - loss: 0.4011 - accuracy: 0.8438 - val_loss: 0.4243 - val_accuracy: 0.8333
Epoch 47/100
6/6 [==============================] - 0s 4ms/step - loss: 0.3937 - accuracy: 0.8542 - val_loss: 0.4189 - val_accuracy: 0.8333
Epoch 48/100
6/6 [==============================] - 0s 4ms/step - loss: 0.3863 - accuracy: 0.8542 - val_loss: 0.4135 - val_accuracy: 0.8333
Epoch 49/100
6/6 [==============================] - 0s 4ms/step - loss: 0.3799 - accuracy: 0.8542 - val_loss: 0.4084 - val_accuracy: 0.8750
Epoch 50/100
6/6 [==============================] - 0s 4ms/step - loss: 0.3728 - accuracy: 0.8646 - val_loss: 0.4017 - val_accuracy: 0.8750
Epoch 51/100
6/6 [==============================] - 0s 4ms/step - loss: 0.3668 - accuracy: 0.8750 - val_loss: 0.3964 - val_accuracy: 0.8750
Epoch 52/100
6/6 [==============================] - 0s 4ms/step - loss: 0.3608 - accuracy: 0.8750 - val_loss: 0.3909 - val_accuracy: 0.8750
Epoch 53/100
6/6 [==============================] - 0s 4ms/step - loss: 0.3550 - accuracy: 0.8750 - val_loss: 0.3856 - val_accuracy: 0.8750
Epoch 54/100
6/6 [==============================] - 0s 4ms/step - loss: 0.3497 - accuracy: 0.8750 - val_loss: 0.3809 - val_accuracy: 0.8750
Epoch 55/100
6/6 [==============================] - 0s 4ms/step - loss: 0.3442 - accuracy: 0.8750 - val_loss: 0.3751 - val_accuracy: 0.8750
Epoch 56/100
6/6 [==============================] - 0s 4ms/step - loss: 0.3391 - accuracy: 0.8750 - val_loss: 0.3711 - val_accuracy: 0.9167
Epoch 57/100
6/6 [==============================] - 0s 4ms/step - loss: 0.3342 - accuracy: 0.8854 - val_loss: 0.3668 - val_accuracy: 0.9167
Epoch 58/100
6/6 [==============================] - 0s 4ms/step - loss: 0.3296 - accuracy: 0.8854 - val_loss: 0.3642 - val_accuracy: 0.9167
Epoch 59/100
6/6 [==============================] - 0s 4ms/step - loss: 0.3246 - accuracy: 0.9062 - val_loss: 0.3608 - val_accuracy: 0.9167
Epoch 60/100
6/6 [==============================] - 0s 4ms/step - loss: 0.3201 - accuracy: 0.9062 - val_loss: 0.3569 - val_accuracy: 0.9167
Epoch 61/100
6/6 [==============================] - 0s 4ms/step - loss: 0.3153 - accuracy: 0.9062 - val_loss: 0.3518 - val_accuracy: 0.9167
Epoch 62/100
6/6 [==============================] - 0s 4ms/step - loss: 0.3109 - accuracy: 0.9062 - val_loss: 0.3465 - val_accuracy: 0.9167
Epoch 63/100
6/6 [==============================] - 0s 4ms/step - loss: 0.3068 - accuracy: 0.9062 - val_loss: 0.3411 - val_accuracy: 0.9167
Epoch 64/100
6/6 [==============================] - 0s 4ms/step - loss: 0.3035 - accuracy: 0.9062 - val_loss: 0.3389 - val_accuracy: 0.9167
Epoch 65/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2991 - accuracy: 0.9271 - val_loss: 0.3356 - val_accuracy: 0.9167
Epoch 66/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2947 - accuracy: 0.9271 - val_loss: 0.3312 - val_accuracy: 0.9167
Epoch 67/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2908 - accuracy: 0.9271 - val_loss: 0.3256 - val_accuracy: 0.9167
Epoch 68/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2871 - accuracy: 0.9271 - val_loss: 0.3217 - val_accuracy: 0.9167
Epoch 69/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2833 - accuracy: 0.9271 - val_loss: 0.3178 - val_accuracy: 0.9167
Epoch 70/100
6/6 [==============================] - 0s 5ms/step - loss: 0.2799 - accuracy: 0.9271 - val_loss: 0.3132 - val_accuracy: 0.9167
Epoch 71/100
6/6 [==============================] - 0s 5ms/step - loss: 0.2764 - accuracy: 0.9167 - val_loss: 0.3106 - val_accuracy: 0.9167
Epoch 72/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2726 - accuracy: 0.9167 - val_loss: 0.3074 - val_accuracy: 0.9167
Epoch 73/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2694 - accuracy: 0.9167 - val_loss: 0.3029 - val_accuracy: 0.9167
Epoch 74/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2663 - accuracy: 0.9167 - val_loss: 0.2996 - val_accuracy: 0.9167
Epoch 75/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2631 - accuracy: 0.9167 - val_loss: 0.2981 - val_accuracy: 0.9167
Epoch 76/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2595 - accuracy: 0.9167 - val_loss: 0.2928 - val_accuracy: 0.9167
Epoch 77/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2563 - accuracy: 0.9167 - val_loss: 0.2899 - val_accuracy: 0.9167
Epoch 78/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2531 - accuracy: 0.9167 - val_loss: 0.2866 - val_accuracy: 0.9167
Epoch 79/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2505 - accuracy: 0.9167 - val_loss: 0.2809 - val_accuracy: 0.9167
Epoch 80/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2469 - accuracy: 0.9167 - val_loss: 0.2778 - val_accuracy: 0.9167
Epoch 81/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2440 - accuracy: 0.9271 - val_loss: 0.2754 - val_accuracy: 0.9167
Epoch 82/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2418 - accuracy: 0.9271 - val_loss: 0.2750 - val_accuracy: 0.9167
Epoch 83/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2383 - accuracy: 0.9375 - val_loss: 0.2717 - val_accuracy: 0.9167
Epoch 84/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2351 - accuracy: 0.9375 - val_loss: 0.2673 - val_accuracy: 0.9167
Epoch 85/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2323 - accuracy: 0.9375 - val_loss: 0.2622 - val_accuracy: 0.9167
Epoch 86/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2301 - accuracy: 0.9375 - val_loss: 0.2606 - val_accuracy: 0.9167
Epoch 87/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2268 - accuracy: 0.9479 - val_loss: 0.2568 - val_accuracy: 0.9167
Epoch 88/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2246 - accuracy: 0.9479 - val_loss: 0.2545 - val_accuracy: 0.9167
Epoch 89/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2216 - accuracy: 0.9479 - val_loss: 0.2491 - val_accuracy: 0.9167
Epoch 90/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2188 - accuracy: 0.9479 - val_loss: 0.2456 - val_accuracy: 0.9167
Epoch 91/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2164 - accuracy: 0.9479 - val_loss: 0.2413 - val_accuracy: 0.9167
Epoch 92/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2140 - accuracy: 0.9479 - val_loss: 0.2392 - val_accuracy: 0.9167
Epoch 93/100
6/6 [==============================] - 0s 5ms/step - loss: 0.2114 - accuracy: 0.9479 - val_loss: 0.2371 - val_accuracy: 0.9167
Epoch 94/100
6/6 [==============================] - 0s 6ms/step - loss: 0.2090 - accuracy: 0.9479 - val_loss: 0.2346 - val_accuracy: 0.9167
Epoch 95/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2065 - accuracy: 0.9479 - val_loss: 0.2310 - val_accuracy: 0.9167
Epoch 96/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2038 - accuracy: 0.9479 - val_loss: 0.2277 - val_accuracy: 0.9167
Epoch 97/100
6/6 [==============================] - 0s 4ms/step - loss: 0.2015 - accuracy: 0.9375 - val_loss: 0.2248 - val_accuracy: 0.9167
Epoch 98/100
6/6 [==============================] - 0s 4ms/step - loss: 0.1992 - accuracy: 0.9375 - val_loss: 0.2217 - val_accuracy: 0.9167
Epoch 99/100
6/6 [==============================] - 0s 4ms/step - loss: 0.1970 - accuracy: 0.9375 - val_loss: 0.2184 - val_accuracy: 0.9167
Epoch 100/100
6/6 [==============================] - 0s 4ms/step - loss: 0.1946 - accuracy: 0.9375 - val_loss: 0.2153 - val_accuracy: 0.9167
1/1 [==============================] - 0s 13ms/step - loss: 0.1551 - accuracy: 1.0000

Tabular NN Test Accuracy: 1.0000
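The parameter counts in the summary above (50, 88, 27) follow directly from the Dense layer formula: `n_in * n_out` weights plus `n_out` biases. A quick sketch to verify them by hand:

```python
# Trainable parameters of a fully connected (Dense) layer:
# one weight per (input, output) pair plus one bias per output unit.
def dense_params(n_in, n_out):
    return n_in * n_out + n_out

# Layer sizes of tabular_model: 4 features -> 10 -> 8 -> 3 classes
layers = [(4, 10), (10, 8), (8, 3)]
counts = [dense_params(i, o) for i, o in layers]
print(counts, sum(counts))  # [50, 88, 27] 165 — matches model.summary()
```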
# Visualize the training history
plt.figure(figsize=(12, 5))

# Plot loss history
plt.subplot(1, 2, 1)
plt.plot(history_tabular.history['loss'], label='Training Loss')
plt.plot(history_tabular.history['val_loss'], label='Validation Loss')
plt.title('Tabular NN Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.grid(True)

# Plot accuracy history
plt.subplot(1, 2, 2)
plt.plot(history_tabular.history['accuracy'], label='Training Accuracy')
plt.plot(history_tabular.history['val_accuracy'], label='Validation Accuracy')
plt.title('Tabular NN Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.grid(True)

plt.tight_layout()
plt.show()

# Make predictions with the model
y_pred_prob = tabular_model.predict(X_test)
y_pred = np.argmax(y_pred_prob, axis=1)
y_true = np.argmax(y_test, axis=1)

# Evaluate the predictions
print("Classification Report:")
print(classification_report(y_true, y_pred, target_names=iris.target_names))

# Confusion Matrix
plt.figure(figsize=(8, 6))
cm = confusion_matrix(y_true, y_pred)
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues',
            xticklabels=iris.target_names,
            yticklabels=iris.target_names)
plt.xlabel('Predicted')
plt.ylabel('True')
plt.title('Confusion Matrix')
plt.show()
1/1 [==============================] - 0s 28ms/step
Classification Report:
              precision    recall  f1-score   support

      setosa       1.00      1.00      1.00        10
  versicolor       1.00      1.00      1.00         9
   virginica       1.00      1.00      1.00        11

    accuracy                           1.00        30
   macro avg       1.00      1.00      1.00        30
weighted avg       1.00      1.00      1.00        30
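The metrics in the report above can be recomputed from scratch as a sanity check. A minimal numpy sketch using a small hypothetical label set (not the actual test split): per class, precision divides the diagonal by the predicted-count column sums, recall by the true-count row sums.

```python
import numpy as np

def confusion(y_true, y_pred, n_classes):
    # Rows = true class, columns = predicted class
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical labels for illustration (0=setosa, 1=versicolor, 2=virginica)
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 1, 2, 2, 2])

cm = confusion(y_true, y_pred, 3)
precision = np.diag(cm) / cm.sum(axis=0)  # column sums = predicted counts
recall    = np.diag(cm) / cm.sum(axis=1)  # row sums = true counts
print(cm)
print(precision, recall)
```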

6. Experimenting with Hyperparameters (Number of Layers & Neurons)

Let's try several neural network architectures to see how they affect model performance.

# Helper to build, train, and evaluate a model with a given architecture
def create_train_evaluate_model(hidden_layers, X_train, y_train, X_test, y_test):
    model = keras.Sequential()
    
    # First hidden layer (input_shape declares the number of input features)
    model.add(keras.layers.Dense(hidden_layers[0], activation='relu', input_shape=(X_train.shape[1],)))
    
    # Remaining hidden layers
    for units in hidden_layers[1:]:
        model.add(keras.layers.Dense(units, activation='relu'))
    
    # Output layer (3 neurons for the 3 Iris classes)
    model.add(keras.layers.Dense(3, activation='softmax'))
    
    # Compile the model
    model.compile(
        optimizer='adam',
        loss='categorical_crossentropy',
        metrics=['accuracy']
    )
    
    # Train the model
    history = model.fit(
        X_train, y_train,
        epochs=50,
        batch_size=16,
        validation_split=0.2,
        verbose=0
    )
    
    # Evaluate on the test set
    _, accuracy = model.evaluate(X_test, y_test, verbose=0)
    
    return model, history, accuracy

# Different architectures to test
architectures = [
    [4],                  # 1 hidden layer with 4 neurons
    [8],                  # 1 hidden layer with 8 neurons
    [4, 4],               # 2 hidden layers with 4 neurons each
    [8, 4],               # 2 hidden layers with 8 and 4 neurons
    [16, 8, 4]            # 3 hidden layers with 16, 8, and 4 neurons
]

# Evaluate each architecture
results = []
histories = []

for i, architecture in enumerate(architectures):
    print(f"Training model with architecture: {architecture}")
    model, history, accuracy = create_train_evaluate_model(
        architecture, X_train, y_train, X_test, y_test
    )
    results.append({
        'architecture': architecture,
        'accuracy': accuracy
    })
    histories.append(history)
    print(f"Test accuracy: {accuracy:.4f}\n")

# Show the results as a table
results_df = pd.DataFrame(results)
results_df['architecture'] = results_df['architecture'].apply(lambda x: str(x))
print("Comparison of Different Architectures:")
print(results_df.sort_values('accuracy', ascending=False))
Training model with architecture: [4]
Test accuracy: 0.8000

Training model with architecture: [8]
Test accuracy: 0.9000

Training model with architecture: [4, 4]
Test accuracy: 0.8333

Training model with architecture: [8, 4]
Test accuracy: 0.6333

Training model with architecture: [16, 8, 4]
Test accuracy: 0.9667

Comparison of Different Architectures:
  architecture  accuracy
4   [16, 8, 4]  0.966667
1          [8]  0.900000
2       [4, 4]  0.833333
0          [4]  0.800000
3       [8, 4]  0.633333
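Raw accuracy alone hides how different these models are in capacity. A small sketch counting total trainable parameters per architecture (4 Iris features in, 3 classes out) shows that capacity and test accuracy are not monotonically related here:

```python
# Total trainable parameters of an MLP: sum over consecutive layer pairs
# of (weights = a*b) + (biases = b).
def total_params(hidden, n_in=4, n_out=3):
    sizes = [n_in] + hidden + [n_out]
    return sum(a * b + b for a, b in zip(sizes, sizes[1:]))

for arch in [[4], [8], [4, 4], [8, 4], [16, 8, 4]]:
    print(arch, total_params(arch))
```

Note, for example, that `[8, 4]` has more parameters than `[8]` yet scored lower in this single run; with such a small dataset, rerunning with different random seeds can change the ranking.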
# Visualize the accuracy comparison
plt.figure(figsize=(10, 6))
plt.bar(
    results_df['architecture'],
    results_df['accuracy'],
    color='skyblue'
)
plt.title('Accuracy of Different Neural Network Architectures')
plt.xlabel('Architecture (Hidden Layers)')
plt.ylabel('Test Accuracy')
plt.ylim(0.5, 1.0)  # lower bound covers the weakest result (0.63) so no bar is clipped
plt.grid(axis='y', linestyle='--', alpha=0.7)
plt.xticks(rotation=45)
plt.tight_layout()
plt.show()

7. Conclusion

From the experiments above, we can draw several conclusions:

  1. Perceptron (Single Neuron): Works well for linearly separable classification, but cannot handle non-linear data such as the circular pattern.

  2. Multi-layer Neural Network: Learns non-linear patterns well through its hidden layers.

  3. Neural Network for Tabular Data: Can be applied effectively to tabular datasets such as Iris after appropriate preprocessing.

  4. Effect of Architecture: The number of layers and neurons affects model performance. A more complex architecture does not always perform better and can lead to overfitting.

Neural networks are highly flexible and can be applied to many kinds of data. Proper preprocessing and a well-chosen architecture are essential for optimal performance.
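Point 3 highlights preprocessing. What `keras.utils.to_categorical` and `StandardScaler` compute can be sketched in plain numpy (illustrative values, not the actual Iris splits):

```python
import numpy as np

# One-hot encode integer class labels (what keras.utils.to_categorical does)
y = np.array([0, 2, 1, 2])
onehot = np.eye(3)[y]

# Z-score standardization (what StandardScaler fits on the training split)
X = np.array([[5.1, 3.5], [4.9, 3.0], [4.7, 3.2]])
mu, sigma = X.mean(axis=0), X.std(axis=0)
X_scaled = (X - mu) / sigma
print(onehot)
print(X_scaled.mean(axis=0), X_scaled.std(axis=0))  # ~0 and ~1 per feature
```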


Explanation of Layers in a Neural Network

A neural network implementation involves several kinds of layers that are important to understand:

  1. Input Layer
    • The first layer, which receives the input data
    • Has one neuron per feature in the dataset
    • In Keras, it is defined via the input_shape parameter
  2. Hidden Layer
    • Sits between the input and output layers
    • The number of hidden layers and neurons can vary
    • Extracts complex patterns and features from the data
    • Usually uses a non-linear activation function such as ReLU
  3. Output Layer
    • The final layer, which produces the model's prediction
    • Has as many neurons as the prediction target requires:
      • 1 neuron + sigmoid activation: binary classification
      • n neurons + softmax activation: multi-class classification with n classes

The choice of architecture (number of layers and neurons) strongly affects the model's ability to capture the complexity of the data.
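The two output-layer activations listed above can be sketched in plain numpy (illustrative only; Keras provides its own implementations):

```python
import numpy as np

def sigmoid(z):
    # Binary output: squashes a single logit into a probability in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    # Multi-class output: n logits -> n probabilities that sum to 1
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

p_binary = sigmoid(0.0)                      # 0.5: the undecided point
p_multi = softmax(np.array([2.0, 1.0, 0.1])) # class 0 gets the highest probability
print(p_binary, p_multi, p_multi.sum())
```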