Neural Network for Tabular Data using TensorFlow

Author

Deri Siswara

In this notebook, we implement a neural network for tabular data using TensorFlow. The example walks through the complete workflow, from data preparation to model evaluation.

1. Importing the Required Libraries

First, we import the libraries used throughout this project.

# Import the required libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

# Check the TensorFlow version
print(f"TensorFlow version: {tf.__version__}")

# Set random seeds for reproducibility
np.random.seed(42)
tf.random.set_seed(42)

TensorFlow version: 2.15.0

2. Loading and Exploring the Dataset

For this example, we use the California Housing dataset from scikit-learn, a classic tabular regression problem.

# Load the California Housing dataset
from sklearn.datasets import fetch_california_housing

housing = fetch_california_housing()
X = pd.DataFrame(housing.data, columns=housing.feature_names)
y = housing.target

# Inspect the first few rows
print("Shape of features:", X.shape)
print("Shape of target:", y.shape)
print("\nFeature names:", housing.feature_names)
print("\nSample data:")
display(X.head())

# Descriptive statistics
print("\nDescriptive statistics:")
display(X.describe())

# Histogram of the target variable
plt.figure(figsize=(10, 6))
plt.hist(y, bins=30)
plt.title('Distribution of House Prices')
plt.xlabel('Median House Value (in $100,000)')
plt.ylabel('Frequency')
plt.show()
Shape of features: (20640, 8)
Shape of target: (20640,)

Feature names: ['MedInc', 'HouseAge', 'AveRooms', 'AveBedrms', 'Population', 'AveOccup', 'Latitude', 'Longitude']

Sample data:
MedInc HouseAge AveRooms AveBedrms Population AveOccup Latitude Longitude
0 8.3252 41.0 6.984127 1.023810 322.0 2.555556 37.88 -122.23
1 8.3014 21.0 6.238137 0.971880 2401.0 2.109842 37.86 -122.22
2 7.2574 52.0 8.288136 1.073446 496.0 2.802260 37.85 -122.24
3 5.6431 52.0 5.817352 1.073059 558.0 2.547945 37.85 -122.25
4 3.8462 52.0 6.281853 1.081081 565.0 2.181467 37.85 -122.25

Descriptive statistics:
MedInc HouseAge AveRooms AveBedrms Population AveOccup Latitude Longitude
count 20640.000000 20640.000000 20640.000000 20640.000000 20640.000000 20640.000000 20640.000000 20640.000000
mean 3.870671 28.639486 5.429000 1.096675 1425.476744 3.070655 35.631861 -119.569704
std 1.899822 12.585558 2.474173 0.473911 1132.462122 10.386050 2.135952 2.003532
min 0.499900 1.000000 0.846154 0.333333 3.000000 0.692308 32.540000 -124.350000
25% 2.563400 18.000000 4.440716 1.006079 787.000000 2.429741 33.930000 -121.800000
50% 3.534800 29.000000 5.229129 1.048780 1166.000000 2.818116 34.260000 -118.490000
75% 4.743250 37.000000 6.052381 1.099526 1725.000000 3.282261 37.710000 -118.010000
max 15.000100 52.000000 141.909091 34.066667 35682.000000 1243.333333 41.950000 -114.310000

3. Correlation Analysis

Let's examine the correlation between the features and the target.

# Correlation analysis
data_corr = pd.DataFrame(X.copy())
data_corr['PRICE'] = y

plt.figure(figsize=(12, 10))
correlation_matrix = data_corr.corr()
sns.heatmap(correlation_matrix, annot=True, cmap='coolwarm', fmt='.2f')
plt.title('Correlation Matrix')
plt.tight_layout()
plt.show()

# Scatter plots of each feature against price
fig, axes = plt.subplots(2, 4, figsize=(20, 10))
axes = axes.flatten()

for i, feature in enumerate(housing.feature_names):
    axes[i].scatter(X[feature], y, alpha=0.5)
    axes[i].set_xlabel(feature)
    axes[i].set_ylabel('Price')
    axes[i].set_title(f'{feature} vs Price')

plt.tight_layout()
plt.show()

4. Data Preprocessing

Before training the model, we split the data into training and test sets and normalize the features.

# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Build a pipeline for numeric preprocessing
numeric_features = X.columns.tolist()
numeric_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler())
])

# Build the preprocessor with ColumnTransformer
preprocessor = ColumnTransformer(
    transformers=[
        ('num', numeric_transformer, numeric_features)
    ])

# Apply the preprocessing
X_train_processed = preprocessor.fit_transform(X_train)
X_test_processed = preprocessor.transform(X_test)

print("Shape after preprocessing:")
print("X_train:", X_train_processed.shape)
print("X_test:", X_test_processed.shape)
Shape after preprocessing:
X_train: (16512, 8)
X_test: (4128, 8)

5. Building the Neural Network Model

Now we build the neural network model using TensorFlow and Keras.

# Function to build the neural network model
def build_model(input_shape, layers=[64, 32], activation='relu', 
                learning_rate=0.001, dropout_rate=0.2):
    model = tf.keras.Sequential()
    
    # Input layer
    model.add(tf.keras.layers.InputLayer(input_shape=input_shape))
    
    # Hidden layers
    for units in layers:
        model.add(tf.keras.layers.Dense(units, activation=activation))
        model.add(tf.keras.layers.BatchNormalization())
        model.add(tf.keras.layers.Dropout(dropout_rate))
    
    # Output layer (for regression)
    model.add(tf.keras.layers.Dense(1))
    
    # Compile the model
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
        loss='mse',
        metrics=['mae']
    )
    
    return model

# Build the model
input_shape = (X_train_processed.shape[1],)
model = build_model(input_shape, layers=[128, 64, 32], dropout_rate=0.3)

# Display the model architecture summary
model.summary()

Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense (Dense)               (None, 128)               1152      
                                                                 
 batch_normalization (Batch  (None, 128)               512       
 Normalization)                                                  
                                                                 
 dropout (Dropout)           (None, 128)               0         
                                                                 
 dense_1 (Dense)             (None, 64)                8256      
                                                                 
 batch_normalization_1 (Bat  (None, 64)                256       
 chNormalization)                                                
                                                                 
 dropout_1 (Dropout)         (None, 64)                0         
                                                                 
 dense_2 (Dense)             (None, 32)                2080      
                                                                 
 batch_normalization_2 (Bat  (None, 32)                128       
 chNormalization)                                                
                                                                 
 dropout_2 (Dropout)         (None, 32)                0         
                                                                 
 dense_3 (Dense)             (None, 1)                 33        
                                                                 
=================================================================
Total params: 12417 (48.50 KB)
Trainable params: 11969 (46.75 KB)
Non-trainable params: 448 (1.75 KB)
_________________________________________________________________
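As a sanity check, the parameter counts in the summary above can be reproduced by hand: a Dense layer has inputs × units weights plus one bias per unit, and a BatchNormalization layer has 4 parameters per unit (gamma and beta are trainable; the moving mean and variance are not). A minimal sketch:

```python
# Reproduce the parameter counts from model.summary() by hand
def dense_params(n_in, n_out):
    # weights (n_in * n_out) plus one bias per unit
    return n_in * n_out + n_out

def batchnorm_params(n_units):
    # gamma, beta (trainable) + moving mean, moving variance (non-trainable)
    return 4 * n_units

layer_sizes = [(8, 128), (128, 64), (64, 32), (32, 1)]
total = 0
non_trainable = 0
for i, (n_in, n_out) in enumerate(layer_sizes):
    total += dense_params(n_in, n_out)
    if i < len(layer_sizes) - 1:  # each hidden Dense layer is followed by BatchNorm
        total += batchnorm_params(n_out)
        non_trainable += batchnorm_params(n_out) // 2  # moving statistics

print(total, total - non_trainable, non_trainable)  # 12417 11969 448
```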

6. Training the Model

Next, we train the model on the processed data.

# Configure early stopping to prevent overfitting
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',
    patience=10,
    restore_best_weights=True
)

# Configure a model checkpoint to save the best model
model_checkpoint = tf.keras.callbacks.ModelCheckpoint(
    'best_model.h5',
    monitor='val_loss',
    save_best_only=True,
    verbose=1
)

# Train the model
history = model.fit(
    X_train_processed,
    y_train,
    epochs=100,
    batch_size=32,
    validation_split=0.2,
    callbacks=[early_stopping, model_checkpoint],
    verbose=1
)
Epoch 1/100
403/413 [============================>.] - ETA: 0s - loss: 2.9026 - mae: 1.3022
Epoch 1: val_loss improved from inf to 0.70579, saving model to best_model.h5
413/413 [==============================] - 2s 2ms/step - loss: 2.8669 - mae: 1.2926 - val_loss: 0.7058 - val_mae: 0.6160
Epoch 2/100
 49/413 [==>...........................] - ETA: 0s - loss: 1.3317 - mae: 0.8827
408/413 [============================>.] - ETA: 0s - loss: 1.0382 - mae: 0.7807
Epoch 2: val_loss improved from 0.70579 to 0.55656, saving model to best_model.h5
413/413 [==============================] - 1s 1ms/step - loss: 1.0383 - mae: 0.7809 - val_loss: 0.5566 - val_mae: 0.5128
Epoch 3/100
393/413 [===========================>..] - ETA: 0s - loss: 0.7746 - mae: 0.6666
Epoch 3: val_loss did not improve from 0.55656
413/413 [==============================] - 1s 1ms/step - loss: 0.7683 - mae: 0.6638 - val_loss: 0.5628 - val_mae: 0.5423
Epoch 4/100
373/413 [==========================>...] - ETA: 0s - loss: 0.6456 - mae: 0.6002
Epoch 4: val_loss improved from 0.55656 to 0.49242, saving model to best_model.h5
413/413 [==============================] - 1s 1ms/step - loss: 0.6423 - mae: 0.5993 - val_loss: 0.4924 - val_mae: 0.4927
Epoch 5/100
411/413 [============================>.] - ETA: 0s - loss: 0.5795 - mae: 0.5675
Epoch 5: val_loss did not improve from 0.49242
413/413 [==============================] - 1s 1ms/step - loss: 0.5788 - mae: 0.5673 - val_loss: 0.5003 - val_mae: 0.4864
Epoch 6/100
371/413 [=========================>....] - ETA: 0s - loss: 0.5530 - mae: 0.5482
Epoch 6: val_loss did not improve from 0.49242
413/413 [==============================] - 1s 1ms/step - loss: 0.5507 - mae: 0.5472 - val_loss: 0.5147 - val_mae: 0.4977
Epoch 7/100
381/413 [==========================>...] - ETA: 0s - loss: 0.5179 - mae: 0.5296
Epoch 7: val_loss did not improve from 0.49242
413/413 [==============================] - 1s 1ms/step - loss: 0.5150 - mae: 0.5283 - val_loss: 0.5010 - val_mae: 0.4964
Epoch 8/100
381/413 [==========================>...] - ETA: 0s - loss: 0.4973 - mae: 0.5185
Epoch 8: val_loss improved from 0.49242 to 0.46528, saving model to best_model.h5
413/413 [==============================] - 1s 1ms/step - loss: 0.4942 - mae: 0.5165 - val_loss: 0.4653 - val_mae: 0.4829
Epoch 9/100
398/413 [===========================>..] - ETA: 0s - loss: 0.4856 - mae: 0.5083
Epoch 9: val_loss improved from 0.46528 to 0.46398, saving model to best_model.h5
413/413 [==============================] - 1s 2ms/step - loss: 0.4830 - mae: 0.5073 - val_loss: 0.4640 - val_mae: 0.4813
Epoch 10/100
376/413 [==========================>...] - ETA: 0s - loss: 0.4761 - mae: 0.5047
Epoch 10: val_loss did not improve from 0.46398
413/413 [==============================] - 1s 1ms/step - loss: 0.4745 - mae: 0.5030 - val_loss: 0.5161 - val_mae: 0.5031
Epoch 11/100
406/413 [============================>.] - ETA: 0s - loss: 0.4610 - mae: 0.4944
Epoch 11: val_loss did not improve from 0.46398
413/413 [==============================] - 1s 1ms/step - loss: 0.4618 - mae: 0.4944 - val_loss: 0.4890 - val_mae: 0.5015
Epoch 12/100
372/413 [==========================>...] - ETA: 0s - loss: 0.4548 - mae: 0.4912
Epoch 12: val_loss improved from 0.46398 to 0.43874, saving model to best_model.h5
413/413 [==============================] - 1s 1ms/step - loss: 0.4529 - mae: 0.4905 - val_loss: 0.4387 - val_mae: 0.4572
Epoch 13/100
378/413 [==========================>...] - ETA: 0s - loss: 0.4493 - mae: 0.4863
Epoch 13: val_loss did not improve from 0.43874
413/413 [==============================] - 1s 1ms/step - loss: 0.4487 - mae: 0.4863 - val_loss: 0.4764 - val_mae: 0.4889
Epoch 14/100
371/413 [=========================>....] - ETA: 0s - loss: 0.4552 - mae: 0.4895
Epoch 14: val_loss did not improve from 0.43874
413/413 [==============================] - 1s 1ms/step - loss: 0.4503 - mae: 0.4867 - val_loss: 0.4656 - val_mae: 0.4800
Epoch 15/100
388/413 [===========================>..] - ETA: 0s - loss: 0.4410 - mae: 0.4837
Epoch 15: val_loss did not improve from 0.43874
413/413 [==============================] - 0s 1ms/step - loss: 0.4420 - mae: 0.4826 - val_loss: 0.4706 - val_mae: 0.4875
Epoch 16/100
386/413 [===========================>..] - ETA: 0s - loss: 0.4348 - mae: 0.4772
Epoch 16: val_loss did not improve from 0.43874
413/413 [==============================] - 1s 1ms/step - loss: 0.4312 - mae: 0.4757 - val_loss: 0.4514 - val_mae: 0.4566
Epoch 17/100
380/413 [==========================>...] - ETA: 0s - loss: 0.4356 - mae: 0.4777
Epoch 17: val_loss improved from 0.43874 to 0.41818, saving model to best_model.h5
413/413 [==============================] - 1s 1ms/step - loss: 0.4347 - mae: 0.4764 - val_loss: 0.4182 - val_mae: 0.4475
Epoch 18/100
406/413 [============================>.] - ETA: 0s - loss: 0.4182 - mae: 0.4691
Epoch 18: val_loss did not improve from 0.41818
413/413 [==============================] - 1s 1ms/step - loss: 0.4176 - mae: 0.4691 - val_loss: 0.4394 - val_mae: 0.4596
Epoch 19/100
381/413 [==========================>...] - ETA: 0s - loss: 0.4323 - mae: 0.4755
Epoch 19: val_loss did not improve from 0.41818
413/413 [==============================] - 1s 1ms/step - loss: 0.4292 - mae: 0.4732 - val_loss: 0.4293 - val_mae: 0.4497
Epoch 20/100
375/413 [==========================>...] - ETA: 0s - loss: 0.4227 - mae: 0.4706
Epoch 20: val_loss did not improve from 0.41818
413/413 [==============================] - 1s 1ms/step - loss: 0.4205 - mae: 0.4699 - val_loss: 0.4574 - val_mae: 0.4725
Epoch 21/100
405/413 [============================>.] - ETA: 0s - loss: 0.4056 - mae: 0.4580
Epoch 21: val_loss improved from 0.41818 to 0.40052, saving model to best_model.h5
413/413 [==============================] - 1s 1ms/step - loss: 0.4066 - mae: 0.4587 - val_loss: 0.4005 - val_mae: 0.4467
Epoch 22/100
381/413 [==========================>...] - ETA: 0s - loss: 0.4034 - mae: 0.4576
Epoch 22: val_loss did not improve from 0.40052
413/413 [==============================] - 1s 1ms/step - loss: 0.4053 - mae: 0.4587 - val_loss: 0.4808 - val_mae: 0.4943
Epoch 23/100
371/413 [=========================>....] - ETA: 0s - loss: 0.4003 - mae: 0.4556
Epoch 23: val_loss did not improve from 0.40052
413/413 [==============================] - 1s 1ms/step - loss: 0.4020 - mae: 0.4569 - val_loss: 0.4203 - val_mae: 0.4633
Epoch 24/100
385/413 [==========================>...] - ETA: 0s - loss: 0.4033 - mae: 0.4563
Epoch 24: val_loss did not improve from 0.40052
413/413 [==============================] - 0s 1ms/step - loss: 0.4027 - mae: 0.4562 - val_loss: 0.4434 - val_mae: 0.4771
Epoch 25/100
370/413 [=========================>....] - ETA: 0s - loss: 0.4119 - mae: 0.4646
Epoch 25: val_loss did not improve from 0.40052
413/413 [==============================] - 1s 1ms/step - loss: 0.4074 - mae: 0.4625 - val_loss: 0.4480 - val_mae: 0.4680
Epoch 26/100
398/413 [===========================>..] - ETA: 0s - loss: 0.4020 - mae: 0.4564
Epoch 26: val_loss did not improve from 0.40052
413/413 [==============================] - 1s 1ms/step - loss: 0.3998 - mae: 0.4550 - val_loss: 0.4232 - val_mae: 0.4590
Epoch 27/100
400/413 [============================>.] - ETA: 0s - loss: 0.3999 - mae: 0.4556
Epoch 27: val_loss did not improve from 0.40052
413/413 [==============================] - 1s 1ms/step - loss: 0.3998 - mae: 0.4557 - val_loss: 0.4136 - val_mae: 0.4539
Epoch 28/100
369/413 [=========================>....] - ETA: 0s - loss: 0.3996 - mae: 0.4574
Epoch 28: val_loss did not improve from 0.40052
413/413 [==============================] - 1s 1ms/step - loss: 0.3985 - mae: 0.4566 - val_loss: 0.4315 - val_mae: 0.4604
Epoch 29/100
412/413 [============================>.] - ETA: 0s - loss: 0.4018 - mae: 0.4593
Epoch 29: val_loss improved from 0.40052 to 0.35726, saving model to best_model.h5
413/413 [==============================] - 1s 1ms/step - loss: 0.4015 - mae: 0.4591 - val_loss: 0.3573 - val_mae: 0.4219
Epoch 30/100
411/413 [============================>.] - ETA: 0s - loss: 0.3938 - mae: 0.4532
Epoch 30: val_loss did not improve from 0.35726
413/413 [==============================] - 1s 1ms/step - loss: 0.3937 - mae: 0.4531 - val_loss: 0.4469 - val_mae: 0.4701
Epoch 31/100
375/413 [==========================>...] - ETA: 0s - loss: 0.3889 - mae: 0.4498
Epoch 31: val_loss did not improve from 0.35726
413/413 [==============================] - 1s 1ms/step - loss: 0.3846 - mae: 0.4484 - val_loss: 0.4155 - val_mae: 0.4513
Epoch 32/100
371/413 [=========================>....] - ETA: 0s - loss: 0.3805 - mae: 0.4434
Epoch 32: val_loss did not improve from 0.35726
413/413 [==============================] - 1s 1ms/step - loss: 0.3801 - mae: 0.4430 - val_loss: 0.3725 - val_mae: 0.4320
Epoch 33/100
406/413 [============================>.] - ETA: 0s - loss: 0.3887 - mae: 0.4502
Epoch 33: val_loss did not improve from 0.35726
413/413 [==============================] - 1s 1ms/step - loss: 0.3893 - mae: 0.4505 - val_loss: 0.4813 - val_mae: 0.4975
Epoch 34/100
405/413 [============================>.] - ETA: 0s - loss: 0.3873 - mae: 0.4499
Epoch 34: val_loss did not improve from 0.35726
413/413 [==============================] - 1s 1ms/step - loss: 0.3870 - mae: 0.4497 - val_loss: 0.4350 - val_mae: 0.4725
Epoch 35/100
373/413 [==========================>...] - ETA: 0s - loss: 0.3774 - mae: 0.4405
Epoch 35: val_loss did not improve from 0.35726
413/413 [==============================] - 1s 1ms/step - loss: 0.3780 - mae: 0.4413 - val_loss: 0.4147 - val_mae: 0.4569
Epoch 36/100
371/413 [=========================>....] - ETA: 0s - loss: 0.3792 - mae: 0.4420
Epoch 36: val_loss did not improve from 0.35726
413/413 [==============================] - 1s 1ms/step - loss: 0.3786 - mae: 0.4422 - val_loss: 0.3854 - val_mae: 0.4372
Epoch 37/100
377/413 [==========================>...] - ETA: 0s - loss: 0.3726 - mae: 0.4375
Epoch 37: val_loss did not improve from 0.35726
413/413 [==============================] - 1s 1ms/step - loss: 0.3712 - mae: 0.4375 - val_loss: 0.3614 - val_mae: 0.4228
Epoch 38/100
368/413 [=========================>....] - ETA: 0s - loss: 0.3739 - mae: 0.4400
Epoch 38: val_loss did not improve from 0.35726
413/413 [==============================] - 1s 1ms/step - loss: 0.3738 - mae: 0.4395 - val_loss: 0.4489 - val_mae: 0.4723
Epoch 39/100
380/413 [==========================>...] - ETA: 0s - loss: 0.3877 - mae: 0.4500
Epoch 39: val_loss did not improve from 0.35726
413/413 [==============================] - 1s 1ms/step - loss: 0.3851 - mae: 0.4487 - val_loss: 0.4560 - val_mae: 0.4812

7. Visualizing the Training Process

Let's visualize how the loss and other metrics evolved during training.

# Plot training & validation loss
plt.figure(figsize=(12, 4))

plt.subplot(1, 2, 1)
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Training and Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(history.history['mae'], label='Training MAE')
plt.plot(history.history['val_mae'], label='Validation MAE')
plt.title('Training and Validation MAE')
plt.xlabel('Epoch')
plt.ylabel('Mean Absolute Error')
plt.legend()

plt.tight_layout()
plt.show()

8. Evaluating the Model

Now we evaluate the model's performance on the test data.

# Generate predictions on the test data
y_pred = model.predict(X_test_processed).flatten()

# Compute evaluation metrics
mse = mean_squared_error(y_test, y_pred)
rmse = np.sqrt(mse)
mae = mean_absolute_error(y_test, y_pred)
r2 = r2_score(y_test, y_pred)

print(f"Mean Squared Error (MSE): {mse:.4f}")
print(f"Root Mean Squared Error (RMSE): {rmse:.4f}")
print(f"Mean Absolute Error (MAE): {mae:.4f}")
print(f"R² Score: {r2:.4f}")

# Visualize predictions vs. actual values
plt.figure(figsize=(10, 6))
plt.scatter(y_test, y_pred, alpha=0.5)
plt.plot([min(y_test), max(y_test)], [min(y_test), max(y_test)], color='red', linestyle='--')
plt.title('Actual vs. Predicted Values')
plt.xlabel('Actual Value')
plt.ylabel('Predicted Value')
plt.grid(True)
plt.show()

# Histogram of residuals
residuals = y_test - y_pred
plt.figure(figsize=(10, 6))
plt.hist(residuals, bins=30)
plt.title('Residual Distribution')
plt.xlabel('Residual')
plt.ylabel('Frequency')
plt.grid(True)
plt.show()

# Residual plot
plt.figure(figsize=(10, 6))
plt.scatter(y_pred, residuals, alpha=0.5)
plt.axhline(y=0, color='red', linestyle='--')
plt.title('Residual Plot')
plt.xlabel('Predicted Value')
plt.ylabel('Residual')
plt.grid(True)
plt.show()
129/129 [==============================] - 0s 566us/step
Mean Squared Error (MSE): 0.3413
Root Mean Squared Error (RMSE): 0.5842
Mean Absolute Error (MAE): 0.4139
R² Score: 0.7396
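As a sanity check on the metrics above, R² can be computed directly from its definition, 1 − SS_res/SS_tot. A minimal sketch on small hypothetical arrays (standing in for `y_test` and `y_pred`), compared against scikit-learn:

```python
import numpy as np
from sklearn.metrics import r2_score

# Hypothetical values standing in for y_test and y_pred
y_true = np.array([2.0, 1.5, 3.0, 2.5, 0.8])
y_hat = np.array([2.2, 1.4, 2.7, 2.6, 1.0])

ss_res = np.sum((y_true - y_hat) ** 2)          # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
r2_manual = 1 - ss_res / ss_tot

print(f"manual R2: {r2_manual:.4f}, sklearn R2: {r2_score(y_true, y_hat):.4f}")
```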

9. Hyperparameter Experiments

We will try several hyperparameter variations to see whether the model's performance can be improved.

# Function for hyperparameter experiments
def experiment_model(layers, learning_rate, dropout_rate):
    # Build the model with the given parameters
    model = build_model(
        input_shape=(X_train_processed.shape[1],),
        layers=layers,
        learning_rate=learning_rate,
        dropout_rate=dropout_rate
    )
    
    # Train the model
    history = model.fit(
        X_train_processed,
        y_train,
        epochs=50,  # fewer epochs for experimentation
        batch_size=32,
        validation_split=0.2,
        callbacks=[early_stopping],
        verbose=0
    )
    
    # Evaluate the model
    y_pred = model.predict(X_test_processed).flatten()
    rmse = np.sqrt(mean_squared_error(y_test, y_pred))
    r2 = r2_score(y_test, y_pred)
    
    return {
        'layers': layers,
        'learning_rate': learning_rate,
        'dropout_rate': dropout_rate,
        'best_val_loss': min(history.history['val_loss']),
        'rmse': rmse,
        'r2': r2
    }

# List of experiments
experiments = [
    experiment_model([64], 0.001, 0.2),
    experiment_model([128, 64], 0.001, 0.2),
    experiment_model([256, 128, 64], 0.001, 0.2),
    experiment_model([128, 64], 0.0001, 0.2),
    experiment_model([128, 64], 0.01, 0.2),
    experiment_model([128, 64], 0.001, 0.5),
]

# Display the experiment results
results_df = pd.DataFrame(experiments)
display(results_df)

# Visualize the RMSE comparison
plt.figure(figsize=(12, 6))
plt.bar(range(len(experiments)), [exp['rmse'] for exp in experiments])
plt.xticks(range(len(experiments)), [f"Exp {i+1}" for i in range(len(experiments))])
plt.title('RMSE Comparison across Experiments')
plt.ylabel('RMSE')
plt.grid(axis='y')
plt.show()
129/129 [==============================] - 0s 481us/step
129/129 [==============================] - 0s 481us/step
129/129 [==============================] - 0s 555us/step
129/129 [==============================] - 0s 531us/step
129/129 [==============================] - 0s 755us/step
129/129 [==============================] - 0s 531us/step
layers learning_rate dropout_rate best_val_loss rmse r2
0 [64] 0.0010 0.2 0.374793 0.596135 0.728804
1 [128, 64] 0.0010 0.2 0.351542 0.588526 0.735684
2 [256, 128, 64] 0.0010 0.2 0.285741 0.530324 0.785377
3 [128, 64] 0.0001 0.2 0.357180 0.596403 0.728560
4 [128, 64] 0.0100 0.2 0.380909 0.598692 0.726473
5 [128, 64] 0.0010 0.5 0.349278 0.597678 0.727399
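The winning configuration can also be read off programmatically with `idxmin`. A sketch on a hypothetical slice of the results (the column names match `results_df` above; the values are illustrative):

```python
import pandas as pd

# Hypothetical subset of the experiment results above
results_df = pd.DataFrame([
    {'layers': '[64]',           'learning_rate': 0.001, 'rmse': 0.596, 'r2': 0.729},
    {'layers': '[128, 64]',      'learning_rate': 0.001, 'rmse': 0.589, 'r2': 0.736},
    {'layers': '[256, 128, 64]', 'learning_rate': 0.001, 'rmse': 0.530, 'r2': 0.785},
])

# Row with the lowest RMSE wins
best = results_df.loc[results_df['rmse'].idxmin()]
print(best['layers'], best['rmse'])
```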

10. Saving and Loading the Model

Finally, we save our best model and show how to load it back.

# Save the best model
model.save('housing_price_model.h5')
print("Model saved as 'housing_price_model.h5'")

# Load the saved model
loaded_model = tf.keras.models.load_model('housing_price_model.h5')
print("Model loaded successfully!")

# Verify the performance of the loaded model
y_pred_loaded = loaded_model.predict(X_test_processed).flatten()
rmse_loaded = np.sqrt(mean_squared_error(y_test, y_pred_loaded))
print(f"RMSE of the loaded model: {rmse_loaded:.4f}")
Model saved as 'housing_price_model.h5'
Model loaded successfully!
129/129 [==============================] - 0s 574us/step
RMSE of the loaded model: 0.5842

11. Worked Example: Predicting New Prices

Let's write a function that predicts a house price from a new set of features.

# Function to predict the price of a new house
def predict_house_price(model, preprocessor, features):
    """
    Predict a house price from the given features.
    
    Parameters:
    -----------
    model : Keras model
        The trained neural network model
    preprocessor : ColumnTransformer
        The fitted data preprocessor
    features : dict
        Dictionary of house features in the format {feature_name: value}
    
    Returns:
    --------
    float : Predicted house price (in $100,000)
    """
    # Convert the features into a DataFrame
    features_df = pd.DataFrame([features])
    
    # Preprocess the features
    processed_features = preprocessor.transform(features_df)
    
    # Predict
    prediction = model.predict(processed_features).flatten()[0]
    
    return prediction

# Example features for a new house
new_house_features = {
    'MedInc': 3.5,          # Median income
    'HouseAge': 25.0,       # House age
    'AveRooms': 5.0,        # Average number of rooms
    'AveBedrms': 2.0,       # Average number of bedrooms
    'Population': 1500.0,   # Population
    'AveOccup': 3.0,        # Average occupancy
    'Latitude': 34.0,       # Latitude
    'Longitude': -118.0     # Longitude
}

# Predict the price
predicted_price = predict_house_price(model, preprocessor, new_house_features)
print(f"Predicted house price: ${predicted_price * 100000:.2f}")

# Try several input variations
variations = [
    {'MedInc': 8.0, 'HouseAge': 5.0, 'AveRooms': 7.0, 'AveBedrms': 3.0, 'Population': 1000.0, 'AveOccup': 2.5, 'Latitude': 34.0, 'Longitude': -118.0},
    {'MedInc': 2.0, 'HouseAge': 40.0, 'AveRooms': 4.0, 'AveBedrms': 1.5, 'Population': 2000.0, 'AveOccup': 4.0, 'Latitude': 34.0, 'Longitude': -118.0},
    {'MedInc': 5.0, 'HouseAge': 15.0, 'AveRooms': 6.0, 'AveBedrms': 2.0, 'Population': 1500.0, 'AveOccup': 3.0, 'Latitude': 37.0, 'Longitude': -122.0}
]

for i, var in enumerate(variations):
    price = predict_house_price(model, preprocessor, var)
    print(f"Variation {i+1}: ${price * 100000:.2f}")
1/1 [==============================] - 0s 59ms/step
Predicted house price: $232404.95
1/1 [==============================] - 0s 11ms/step
Variation 1: $391771.60
1/1 [==============================] - 0s 12ms/step
Variation 2: $176614.80
1/1 [==============================] - 0s 11ms/step
Variation 3: $277331.85

12. Conclusion and Next Steps

In this notebook, we have:
1. Implemented a neural network for tabular data using TensorFlow
2. Performed exploratory data analysis on the California housing dataset
3. Preprocessed and split the data for training and testing
4. Built, trained, and evaluated a neural network model
5. Experimented with hyperparameters to improve performance
6. Saved and reloaded the trained model
7. Demonstrated how to make predictions with the trained model

For further improvement, we could:
- Try more complex model architectures
- Apply cross-validation for more robust evaluation
- Use more sophisticated regularization techniques
- Try advanced feature engineering techniques
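Of these next steps, cross-validation is easy to sketch: scikit-learn's KFold can be wrapped around a small Keras model. This is a minimal illustration on synthetic data (a hypothetical stand-in, not the notebook's full pipeline), averaging the validation MSE across folds:

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import KFold

# Synthetic stand-in for the housing features (hypothetical data)
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 8)).astype('float32')
y = (X @ rng.normal(size=8).astype('float32')) + 0.1 * rng.normal(size=300).astype('float32')

def make_model():
    # Small regression network, in the same spirit as build_model() above
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(8,)),
        tf.keras.layers.Dense(16, activation='relu'),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer='adam', loss='mse')
    return model

# 3-fold cross-validation: train on two folds, evaluate on the held-out fold
scores = []
for train_idx, val_idx in KFold(n_splits=3, shuffle=True, random_state=42).split(X):
    model = make_model()
    model.fit(X[train_idx], y[train_idx], epochs=5, batch_size=32, verbose=0)
    scores.append(model.evaluate(X[val_idx], y[val_idx], verbose=0))

print(f"CV MSE: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```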

9a. Experimenting with Activation Functions

Let's try several activation functions to see how they affect model performance.

The activation functions we will try:
1. ReLU (Rectified Linear Unit) - default
2. Sigmoid
3. Tanh (Hyperbolic Tangent)
4. ELU (Exponential Linear Unit)
5. LeakyReLU
6. SELU (Scaled Exponential Linear Unit)
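To build intuition before training, the shapes of these activations can be compared on a few sample inputs. These are minimal numpy re-implementations for illustration only; the training code below uses the Keras built-ins:

```python
import numpy as np

# Minimal numpy versions of the activation functions (illustrative only)
def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1))

def leaky_relu(x, alpha=0.1):
    return np.where(x > 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for name, f in [('relu', relu), ('sigmoid', sigmoid), ('tanh', np.tanh),
                ('elu', elu), ('leaky_relu', leaky_relu)]:
    print(f"{name:10s}", np.round(f(x), 3))
```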

# Function for activation-function experiments
def experiment_activation(activation, custom_layer=None):
    """
    Experiment with different activation functions
    
    Parameters:
    -----------
    activation : str or callable
        Name of the activation function, or a custom activation function
    custom_layer : object, optional
        A dedicated activation layer if needed (e.g. for LeakyReLU)
    
    Returns:
    --------
    dict : Dictionary containing the experiment results
    """
    # Display name
    if isinstance(activation, str):
        activation_name = activation
    else:
        activation_name = activation.__name__ if hasattr(activation, '__name__') else 'Custom'
    
    print(f"\nTraining model with activation: {activation_name}")
    
    # Build the model with the given activation function
    if custom_layer:
        # Special handling for activation layers such as LeakyReLU
        def build_custom_model(input_shape, layers=[64, 32], learning_rate=0.001, dropout_rate=0.2):
            model = tf.keras.Sequential()
            model.add(tf.keras.layers.InputLayer(input_shape=input_shape))
            
            for units in layers:
                model.add(tf.keras.layers.Dense(units))
                # Add the dedicated activation layer
                model.add(custom_layer)
                model.add(tf.keras.layers.BatchNormalization())
                model.add(tf.keras.layers.Dropout(dropout_rate))
            
            model.add(tf.keras.layers.Dense(1))
            
            model.compile(
                optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
                loss='mse',
                metrics=['mae']
            )
            return model
        
        model = build_custom_model(
            input_shape=(X_train_processed.shape[1],),
            layers=[128, 64],
            dropout_rate=0.3
        )
    else:
        # For standard (string-named) activation functions
        model = build_model(
            input_shape=(X_train_processed.shape[1],),
            layers=[128, 64],
            activation=activation,
            dropout_rate=0.3
        )
    
    # Train the model
    history = model.fit(
        X_train_processed,
        y_train,
        epochs=30,  # fewer epochs to keep the experiments fast
        batch_size=32,
        validation_split=0.2,
        callbacks=[early_stopping],
        verbose=1
    )
    
    # Evaluate the model on the test set
    y_pred = model.predict(X_test_processed).flatten()
    mse = mean_squared_error(y_test, y_pred)
    rmse = np.sqrt(mse)
    mae = mean_absolute_error(y_test, y_pred)
    r2 = r2_score(y_test, y_pred)
    
    return {
        'activation': activation_name,
        'mse': mse,
        'rmse': rmse,
        'mae': mae,
        'r2': r2,
        'best_val_loss': min(history.history['val_loss']),
        'best_val_mae': min(history.history['val_mae']),
        'epochs_trained': len(history.history['loss']),
        'history': history
    }

# Run an experiment for each activation function
activation_experiments = [
    experiment_activation('relu'),
    experiment_activation('sigmoid'),
    experiment_activation('tanh'),
    experiment_activation('elu'),
    experiment_activation(None, custom_layer=tf.keras.layers.LeakyReLU(alpha=0.1)),
    experiment_activation('selu')
]

# Build a DataFrame of results (dropping the history objects)
activation_results = pd.DataFrame([
    {k: v for k, v in exp.items() if k != 'history'} 
    for exp in activation_experiments
])

# Display the results
print("\n=== Activation Function Experiment Results ===")
display(activation_results[['activation', 'rmse', 'r2', 'mae', 'epochs_trained']])

# Compare RMSE across activation functions
plt.figure(figsize=(12, 6))
plt.bar(activation_results['activation'], activation_results['rmse'])
plt.title('RMSE by Activation Function')
plt.ylabel('RMSE (Root Mean Squared Error)')
plt.xlabel('Activation Function')
plt.xticks(rotation=45)
plt.grid(axis='y')
plt.tight_layout()
plt.show()

# Compare R² across activation functions
plt.figure(figsize=(12, 6))
plt.bar(activation_results['activation'], activation_results['r2'])
plt.title('R² Score by Activation Function')
plt.ylabel('R² Score')
plt.xlabel('Activation Function')
plt.xticks(rotation=45)
plt.grid(axis='y')
plt.tight_layout()
plt.show()

# Plot learning curves for the best and worst activation functions
plt.figure(figsize=(15, 10))

# Find the indices of the best and worst activation functions by RMSE
best_idx = activation_results['rmse'].idxmin()
worst_idx = activation_results['rmse'].idxmax()

best_activation = activation_experiments[best_idx]
worst_activation = activation_experiments[worst_idx]

# Learning curve for loss
plt.subplot(2, 2, 1)
plt.plot(best_activation['history'].history['loss'], label=f"Training ({best_activation['activation']})")
plt.plot(best_activation['history'].history['val_loss'], label=f"Validation ({best_activation['activation']})")
plt.plot(worst_activation['history'].history['loss'], label=f"Training ({worst_activation['activation']})", linestyle='--')
plt.plot(worst_activation['history'].history['val_loss'], label=f"Validation ({worst_activation['activation']})", linestyle='--')
plt.title('Learning Curves: Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss (MSE)')
plt.legend()
plt.grid(True)

# Learning curve for MAE
plt.subplot(2, 2, 2)
plt.plot(best_activation['history'].history['mae'], label=f"Training ({best_activation['activation']})")
plt.plot(best_activation['history'].history['val_mae'], label=f"Validation ({best_activation['activation']})")
plt.plot(worst_activation['history'].history['mae'], label=f"Training ({worst_activation['activation']})", linestyle='--')
plt.plot(worst_activation['history'].history['val_mae'], label=f"Validation ({worst_activation['activation']})", linestyle='--')
plt.title('Learning Curves: MAE')
plt.xlabel('Epoch')
plt.ylabel('Mean Absolute Error')
plt.legend()
plt.grid(True)

# Predictions with the best activation function
plt.subplot(2, 2, 3)
plt.scatter(y_test, best_activation['history'].model.predict(X_test_processed).flatten(), alpha=0.5)
plt.plot([min(y_test), max(y_test)], [min(y_test), max(y_test)], color='red', linestyle='--')
plt.title(f'Predictions with {best_activation["activation"]} (Best)')
plt.xlabel('Actual Values')
plt.ylabel('Predictions')
plt.grid(True)

# Predictions with the worst activation function
plt.subplot(2, 2, 4)
plt.scatter(y_test, worst_activation['history'].model.predict(X_test_processed).flatten(), alpha=0.5)
plt.plot([min(y_test), max(y_test)], [min(y_test), max(y_test)], color='red', linestyle='--')
plt.title(f'Predictions with {worst_activation["activation"]} (Worst)')
plt.xlabel('Actual Values')
plt.ylabel('Predictions')
plt.grid(True)

plt.tight_layout()
plt.show()

print(f"\nBest activation function by RMSE: {best_activation['activation']} (RMSE = {best_activation['rmse']:.4f})")
print(f"Worst activation function by RMSE: {worst_activation['activation']} (RMSE = {worst_activation['rmse']:.4f})")
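The `idxmin`/`idxmax` lookup above has a simple pure-Python equivalent, shown here on hypothetical RMSE numbers (not the actual experiment output):

```python
# Hypothetical results, for illustration only
results = [
    {"activation": "relu",    "rmse": 0.55},
    {"activation": "sigmoid", "rmse": 0.59},
    {"activation": "tanh",    "rmse": 0.57},
]

# min()/max() with a key function mirror DataFrame idxmin()/idxmax()
best = min(results, key=lambda r: r["rmse"])
worst = max(results, key=lambda r: r["rmse"])
print(best["activation"], worst["activation"])  # → relu sigmoid
```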

Training model with activation: relu
Epoch 1/30
413/413 [==============================] - 1s 1ms/step - loss: 2.4022 - mae: 1.1726 - val_loss: 0.6231 - val_mae: 0.6149
Epoch 2/30
413/413 [==============================] - 0s 1ms/step - loss: 0.8941 - mae: 0.7206 - val_loss: 0.5002 - val_mae: 0.5077
Epoch 3/30
413/413 [==============================] - 0s 1ms/step - loss: 0.6885 - mae: 0.6245 - val_loss: 0.4495 - val_mae: 0.4831
Epoch 4/30
413/413 [==============================] - 0s 1ms/step - loss: 0.5803 - mae: 0.5704 - val_loss: 0.4498 - val_mae: 0.4698
Epoch 5/30
413/413 [==============================] - 0s 1ms/step - loss: 0.5367 - mae: 0.5442 - val_loss: 0.4312 - val_mae: 0.4588
Epoch 6/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4960 - mae: 0.5211 - val_loss: 0.4249 - val_mae: 0.4602
Epoch 7/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4696 - mae: 0.5064 - val_loss: 0.4140 - val_mae: 0.4637
Epoch 8/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4613 - mae: 0.4988 - val_loss: 0.3938 - val_mae: 0.4625
Epoch 9/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4373 - mae: 0.4852 - val_loss: 0.3896 - val_mae: 0.4528
Epoch 10/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4339 - mae: 0.4785 - val_loss: 0.3860 - val_mae: 0.4606
Epoch 11/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4236 - mae: 0.4757 - val_loss: 0.3954 - val_mae: 0.4709
Epoch 12/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4186 - mae: 0.4704 - val_loss: 0.3834 - val_mae: 0.4337
Epoch 13/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4131 - mae: 0.4676 - val_loss: 0.4231 - val_mae: 0.4741
Epoch 14/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4127 - mae: 0.4671 - val_loss: 0.3870 - val_mae: 0.4570
Epoch 15/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4049 - mae: 0.4621 - val_loss: 0.3918 - val_mae: 0.4622
Epoch 16/30
413/413 [==============================] - 0s 1ms/step - loss: 0.3963 - mae: 0.4568 - val_loss: 0.3937 - val_mae: 0.4379
Epoch 17/30
413/413 [==============================] - 0s 991us/step - loss: 0.3984 - mae: 0.4593 - val_loss: 0.3552 - val_mae: 0.4253
Epoch 18/30
413/413 [==============================] - 0s 1ms/step - loss: 0.3905 - mae: 0.4530 - val_loss: 0.3860 - val_mae: 0.4378
Epoch 19/30
413/413 [==============================] - 0s 1ms/step - loss: 0.3894 - mae: 0.4535 - val_loss: 0.3659 - val_mae: 0.4279
Epoch 20/30
413/413 [==============================] - 0s 1ms/step - loss: 0.3876 - mae: 0.4503 - val_loss: 0.3960 - val_mae: 0.4447
Epoch 21/30
413/413 [==============================] - 0s 1ms/step - loss: 0.3730 - mae: 0.4412 - val_loss: 0.3715 - val_mae: 0.4353
Epoch 22/30
413/413 [==============================] - 0s 988us/step - loss: 0.3772 - mae: 0.4430 - val_loss: 0.4138 - val_mae: 0.4695
Epoch 23/30
413/413 [==============================] - 0s 995us/step - loss: 0.3758 - mae: 0.4423 - val_loss: 0.3739 - val_mae: 0.4607
Epoch 24/30
413/413 [==============================] - 0s 1ms/step - loss: 0.3745 - mae: 0.4401 - val_loss: 0.3905 - val_mae: 0.4562
Epoch 25/30
413/413 [==============================] - 0s 1ms/step - loss: 0.3719 - mae: 0.4428 - val_loss: 0.3745 - val_mae: 0.4399
Epoch 26/30
413/413 [==============================] - 0s 1ms/step - loss: 0.3697 - mae: 0.4388 - val_loss: 0.3634 - val_mae: 0.4412
Epoch 27/30
413/413 [==============================] - 0s 1ms/step - loss: 0.3726 - mae: 0.4401 - val_loss: 0.3437 - val_mae: 0.4196
Epoch 28/30
413/413 [==============================] - 0s 1ms/step - loss: 0.3724 - mae: 0.4395 - val_loss: 0.3512 - val_mae: 0.4149
Epoch 29/30
413/413 [==============================] - 0s 1ms/step - loss: 0.3721 - mae: 0.4407 - val_loss: 0.3398 - val_mae: 0.4317
Epoch 30/30
413/413 [==============================] - 0s 996us/step - loss: 0.3671 - mae: 0.4390 - val_loss: 0.3953 - val_mae: 0.4564
129/129 [==============================] - 0s 508us/step

Training model with activation: sigmoid
Epoch 1/30
413/413 [==============================] - 1s 1ms/step - loss: 2.1219 - mae: 1.1246 - val_loss: 0.7364 - val_mae: 0.6205
Epoch 2/30
413/413 [==============================] - 0s 1ms/step - loss: 0.8569 - mae: 0.7063 - val_loss: 0.4826 - val_mae: 0.4767
Epoch 3/30
413/413 [==============================] - 0s 1ms/step - loss: 0.6910 - mae: 0.6260 - val_loss: 0.4841 - val_mae: 0.4872
Epoch 4/30
413/413 [==============================] - 0s 1ms/step - loss: 0.5954 - mae: 0.5776 - val_loss: 0.4507 - val_mae: 0.4632
Epoch 5/30
413/413 [==============================] - 0s 1ms/step - loss: 0.5533 - mae: 0.5523 - val_loss: 0.4380 - val_mae: 0.4549
Epoch 6/30
413/413 [==============================] - 0s 1ms/step - loss: 0.5204 - mae: 0.5307 - val_loss: 0.4273 - val_mae: 0.4432
Epoch 7/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4899 - mae: 0.5132 - val_loss: 0.4081 - val_mae: 0.4386
Epoch 8/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4784 - mae: 0.5044 - val_loss: 0.3971 - val_mae: 0.4328
Epoch 9/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4608 - mae: 0.4942 - val_loss: 0.3945 - val_mae: 0.4308
Epoch 10/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4571 - mae: 0.4879 - val_loss: 0.3885 - val_mae: 0.4305
Epoch 11/30
413/413 [==============================] - 0s 995us/step - loss: 0.4481 - mae: 0.4865 - val_loss: 0.3859 - val_mae: 0.4321
Epoch 12/30
413/413 [==============================] - 0s 999us/step - loss: 0.4442 - mae: 0.4817 - val_loss: 0.3797 - val_mae: 0.4190
Epoch 13/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4361 - mae: 0.4756 - val_loss: 0.4050 - val_mae: 0.4429
Epoch 14/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4345 - mae: 0.4758 - val_loss: 0.3704 - val_mae: 0.4177
Epoch 15/30
413/413 [==============================] - 0s 993us/step - loss: 0.4273 - mae: 0.4701 - val_loss: 0.3643 - val_mae: 0.4172
Epoch 16/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4155 - mae: 0.4654 - val_loss: 0.3722 - val_mae: 0.4171
Epoch 17/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4194 - mae: 0.4658 - val_loss: 0.3624 - val_mae: 0.4142
Epoch 18/30
413/413 [==============================] - 0s 982us/step - loss: 0.4109 - mae: 0.4600 - val_loss: 0.3646 - val_mae: 0.4100
Epoch 19/30
413/413 [==============================] - 0s 981us/step - loss: 0.4142 - mae: 0.4633 - val_loss: 0.3567 - val_mae: 0.4075
Epoch 20/30
413/413 [==============================] - 0s 989us/step - loss: 0.4101 - mae: 0.4595 - val_loss: 0.3600 - val_mae: 0.4085
Epoch 21/30
413/413 [==============================] - 0s 991us/step - loss: 0.4012 - mae: 0.4545 - val_loss: 0.3515 - val_mae: 0.4091
Epoch 22/30
413/413 [==============================] - 0s 1ms/step - loss: 0.3992 - mae: 0.4532 - val_loss: 0.3732 - val_mae: 0.4184
Epoch 23/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4027 - mae: 0.4546 - val_loss: 0.3497 - val_mae: 0.4104
Epoch 24/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4063 - mae: 0.4561 - val_loss: 0.3516 - val_mae: 0.4077
Epoch 25/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4035 - mae: 0.4581 - val_loss: 0.3487 - val_mae: 0.3999
Epoch 26/30
413/413 [==============================] - 0s 1ms/step - loss: 0.3971 - mae: 0.4527 - val_loss: 0.3472 - val_mae: 0.4044
Epoch 27/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4043 - mae: 0.4580 - val_loss: 0.3447 - val_mae: 0.4028
Epoch 28/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4005 - mae: 0.4539 - val_loss: 0.3500 - val_mae: 0.4003
Epoch 29/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4041 - mae: 0.4567 - val_loss: 0.3651 - val_mae: 0.4252
Epoch 30/30
413/413 [==============================] - 0s 996us/step - loss: 0.4009 - mae: 0.4549 - val_loss: 0.3456 - val_mae: 0.4116
129/129 [==============================] - 0s 518us/step

Training model with activation: tanh
Epoch 1/30
413/413 [==============================] - 1s 1ms/step - loss: 2.2241 - mae: 1.1482 - val_loss: 0.5540 - val_mae: 0.5192
Epoch 2/30
413/413 [==============================] - 0s 1ms/step - loss: 0.8883 - mae: 0.7245 - val_loss: 0.4836 - val_mae: 0.4866
Epoch 3/30
413/413 [==============================] - 0s 1ms/step - loss: 0.6946 - mae: 0.6312 - val_loss: 0.4563 - val_mae: 0.4743
Epoch 4/30
413/413 [==============================] - 0s 1ms/step - loss: 0.6074 - mae: 0.5835 - val_loss: 0.4346 - val_mae: 0.4623
Epoch 5/30
413/413 [==============================] - 0s 1ms/step - loss: 0.5515 - mae: 0.5530 - val_loss: 0.4320 - val_mae: 0.4565
Epoch 6/30
413/413 [==============================] - 0s 1ms/step - loss: 0.5177 - mae: 0.5343 - val_loss: 0.4178 - val_mae: 0.4392
Epoch 7/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4870 - mae: 0.5130 - val_loss: 0.3916 - val_mae: 0.4277
Epoch 8/30
413/413 [==============================] - 0s 988us/step - loss: 0.4730 - mae: 0.5022 - val_loss: 0.3777 - val_mae: 0.4186
Epoch 9/30
413/413 [==============================] - 0s 986us/step - loss: 0.4518 - mae: 0.4907 - val_loss: 0.3745 - val_mae: 0.4163
Epoch 10/30
413/413 [==============================] - 0s 992us/step - loss: 0.4554 - mae: 0.4863 - val_loss: 0.3663 - val_mae: 0.4180
Epoch 11/30
413/413 [==============================] - 0s 975us/step - loss: 0.4419 - mae: 0.4821 - val_loss: 0.3622 - val_mae: 0.4157
Epoch 12/30
413/413 [==============================] - 0s 975us/step - loss: 0.4416 - mae: 0.4812 - val_loss: 0.3648 - val_mae: 0.4113
Epoch 13/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4336 - mae: 0.4747 - val_loss: 0.3927 - val_mae: 0.4342
Epoch 14/30
413/413 [==============================] - 0s 991us/step - loss: 0.4333 - mae: 0.4759 - val_loss: 0.3588 - val_mae: 0.4083
Epoch 15/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4271 - mae: 0.4712 - val_loss: 0.3530 - val_mae: 0.4123
Epoch 16/30
413/413 [==============================] - 0s 985us/step - loss: 0.4147 - mae: 0.4664 - val_loss: 0.3667 - val_mae: 0.4146
Epoch 17/30
413/413 [==============================] - 0s 976us/step - loss: 0.4182 - mae: 0.4655 - val_loss: 0.3540 - val_mae: 0.4075
Epoch 18/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4144 - mae: 0.4638 - val_loss: 0.3666 - val_mae: 0.4092
Epoch 19/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4189 - mae: 0.4665 - val_loss: 0.3536 - val_mae: 0.4090
Epoch 20/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4116 - mae: 0.4617 - val_loss: 0.3654 - val_mae: 0.4087
Epoch 21/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4005 - mae: 0.4556 - val_loss: 0.3527 - val_mae: 0.4070
Epoch 22/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4026 - mae: 0.4527 - val_loss: 0.3598 - val_mae: 0.4076
Epoch 23/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4035 - mae: 0.4558 - val_loss: 0.3419 - val_mae: 0.4041
Epoch 24/30
413/413 [==============================] - 0s 991us/step - loss: 0.4000 - mae: 0.4542 - val_loss: 0.3435 - val_mae: 0.4060
Epoch 25/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4031 - mae: 0.4583 - val_loss: 0.3401 - val_mae: 0.3976
Epoch 26/30
413/413 [==============================] - 0s 998us/step - loss: 0.3953 - mae: 0.4531 - val_loss: 0.3438 - val_mae: 0.4043
Epoch 27/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4018 - mae: 0.4551 - val_loss: 0.3402 - val_mae: 0.3983
Epoch 28/30
413/413 [==============================] - 0s 980us/step - loss: 0.3987 - mae: 0.4520 - val_loss: 0.3432 - val_mae: 0.3961
Epoch 29/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4015 - mae: 0.4547 - val_loss: 0.3402 - val_mae: 0.4051
Epoch 30/30
413/413 [==============================] - 0s 1ms/step - loss: 0.3977 - mae: 0.4527 - val_loss: 0.3361 - val_mae: 0.3994
129/129 [==============================] - 0s 517us/step

Perlatihan model dengan aktivasi: elu

Perlatihan model dengan aktivasi: elu
Epoch 1/30
413/413 [==============================] - 1s 1ms/step - loss: 2.2439 - mae: 1.1355 - val_loss: 0.6349 - val_mae: 0.5481
Epoch 2/30
413/413 [==============================] - 0s 1ms/step - loss: 0.8877 - mae: 0.7119 - val_loss: 0.5147 - val_mae: 0.4780
Epoch 3/30
413/413 [==============================] - 0s 1ms/step - loss: 0.6792 - mae: 0.6168 - val_loss: 0.4610 - val_mae: 0.4714
Epoch 4/30
413/413 [==============================] - 0s 1ms/step - loss: 0.5921 - mae: 0.5741 - val_loss: 0.4530 - val_mae: 0.4542
Epoch 5/30
413/413 [==============================] - 0s 1ms/step - loss: 0.5508 - mae: 0.5506 - val_loss: 0.4535 - val_mae: 0.4621
Epoch 6/30
413/413 [==============================] - 0s 985us/step - loss: 0.5206 - mae: 0.5334 - val_loss: 0.4633 - val_mae: 0.4681
Epoch 7/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4917 - mae: 0.5146 - val_loss: 0.4323 - val_mae: 0.4637
Epoch 8/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4856 - mae: 0.5084 - val_loss: 0.4226 - val_mae: 0.4626
Epoch 9/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4646 - mae: 0.4979 - val_loss: 0.4444 - val_mae: 0.4677
Epoch 10/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4652 - mae: 0.4945 - val_loss: 0.4543 - val_mae: 0.4709
Epoch 11/30
413/413 [==============================] - 0s 991us/step - loss: 0.4532 - mae: 0.4905 - val_loss: 0.4374 - val_mae: 0.4721
Epoch 12/30
413/413 [==============================] - 0s 996us/step - loss: 0.4543 - mae: 0.4910 - val_loss: 0.4263 - val_mae: 0.4530
Epoch 13/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4451 - mae: 0.4838 - val_loss: 0.4808 - val_mae: 0.4893
Epoch 14/30
413/413 [==============================] - 0s 991us/step - loss: 0.4500 - mae: 0.4874 - val_loss: 0.4318 - val_mae: 0.4611
Epoch 15/30
413/413 [==============================] - 0s 993us/step - loss: 0.4410 - mae: 0.4811 - val_loss: 0.4498 - val_mae: 0.4770
Epoch 16/30
413/413 [==============================] - 0s 999us/step - loss: 0.4323 - mae: 0.4783 - val_loss: 0.4442 - val_mae: 0.4614
Epoch 17/30
413/413 [==============================] - 0s 987us/step - loss: 0.4320 - mae: 0.4782 - val_loss: 0.4143 - val_mae: 0.4464
Epoch 18/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4342 - mae: 0.4779 - val_loss: 0.4445 - val_mae: 0.4582
Epoch 19/30
413/413 [==============================] - 0s 994us/step - loss: 0.4306 - mae: 0.4774 - val_loss: 0.4383 - val_mae: 0.4634
Epoch 20/30
413/413 [==============================] - 0s 979us/step - loss: 0.4258 - mae: 0.4724 - val_loss: 0.4819 - val_mae: 0.4796
Epoch 21/30
413/413 [==============================] - 0s 996us/step - loss: 0.4186 - mae: 0.4690 - val_loss: 0.4442 - val_mae: 0.4670
Epoch 22/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4173 - mae: 0.4686 - val_loss: 0.5314 - val_mae: 0.5142
Epoch 23/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4217 - mae: 0.4717 - val_loss: 0.4456 - val_mae: 0.4728
Epoch 24/30
413/413 [==============================] - 0s 981us/step - loss: 0.4180 - mae: 0.4692 - val_loss: 0.4917 - val_mae: 0.4955
Epoch 25/30
413/413 [==============================] - 0s 948us/step - loss: 0.4231 - mae: 0.4735 - val_loss: 0.4873 - val_mae: 0.4834
Epoch 26/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4116 - mae: 0.4663 - val_loss: 0.4552 - val_mae: 0.4741
Epoch 27/30
413/413 [==============================] - 0s 970us/step - loss: 0.4206 - mae: 0.4712 - val_loss: 0.4273 - val_mae: 0.4631
129/129 [==============================] - 0s 492us/step

Training model with activation: Custom
Epoch 1/30
413/413 [==============================] - 1s 1ms/step - loss: 2.2315 - mae: 1.1440 - val_loss: 0.6812 - val_mae: 0.5650
Epoch 2/30
413/413 [==============================] - 0s 1ms/step - loss: 0.8613 - mae: 0.6989 - val_loss: 0.5003 - val_mae: 0.4808
Epoch 3/30
413/413 [==============================] - 0s 949us/step - loss: 0.6461 - mae: 0.6041 - val_loss: 0.5156 - val_mae: 0.5127
Epoch 4/30
413/413 [==============================] - 0s 957us/step - loss: 0.5576 - mae: 0.5550 - val_loss: 0.4376 - val_mae: 0.4536
Epoch 5/30
413/413 [==============================] - 0s 1ms/step - loss: 0.5153 - mae: 0.5290 - val_loss: 0.4380 - val_mae: 0.4596
Epoch 6/30
413/413 [==============================] - 0s 945us/step - loss: 0.4897 - mae: 0.5132 - val_loss: 0.4775 - val_mae: 0.4755
Epoch 7/30
413/413 [==============================] - 0s 983us/step - loss: 0.4671 - mae: 0.5014 - val_loss: 0.4600 - val_mae: 0.4813
Epoch 8/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4543 - mae: 0.4910 - val_loss: 0.4378 - val_mae: 0.4857
Epoch 9/30
413/413 [==============================] - 0s 940us/step - loss: 0.4377 - mae: 0.4831 - val_loss: 0.4547 - val_mae: 0.4780
Epoch 10/30
413/413 [==============================] - 0s 957us/step - loss: 0.4438 - mae: 0.4819 - val_loss: 0.4875 - val_mae: 0.5186
Epoch 11/30
413/413 [==============================] - 0s 948us/step - loss: 0.4288 - mae: 0.4768 - val_loss: 0.4992 - val_mae: 0.5273
Epoch 12/30
413/413 [==============================] - 0s 948us/step - loss: 0.4244 - mae: 0.4729 - val_loss: 0.4409 - val_mae: 0.4600
Epoch 13/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4226 - mae: 0.4714 - val_loss: 0.4920 - val_mae: 0.5004
Epoch 14/30
413/413 [==============================] - 0s 947us/step - loss: 0.4195 - mae: 0.4665 - val_loss: 0.4469 - val_mae: 0.4902
129/129 [==============================] - 0s 516us/step

Training model with activation: selu
Epoch 1/30
413/413 [==============================] - 1s 1ms/step - loss: 2.2124 - mae: 1.1393 - val_loss: 0.6027 - val_mae: 0.5101
Epoch 2/30
413/413 [==============================] - 0s 989us/step - loss: 0.8677 - mae: 0.7085 - val_loss: 0.5013 - val_mae: 0.4756
Epoch 3/30
413/413 [==============================] - 0s 1ms/step - loss: 0.6779 - mae: 0.6192 - val_loss: 0.4749 - val_mae: 0.4656
Epoch 4/30
413/413 [==============================] - 0s 1ms/step - loss: 0.5940 - mae: 0.5746 - val_loss: 0.4478 - val_mae: 0.4515
Epoch 5/30
413/413 [==============================] - 0s 1ms/step - loss: 0.5442 - mae: 0.5453 - val_loss: 0.4457 - val_mae: 0.4581
Epoch 6/30
413/413 [==============================] - 0s 996us/step - loss: 0.5165 - mae: 0.5309 - val_loss: 0.4455 - val_mae: 0.4554
Epoch 7/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4837 - mae: 0.5095 - val_loss: 0.4249 - val_mae: 0.4535
Epoch 8/30
413/413 [==============================] - 0s 994us/step - loss: 0.4816 - mae: 0.5073 - val_loss: 0.4093 - val_mae: 0.4501
Epoch 9/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4603 - mae: 0.4951 - val_loss: 0.4252 - val_mae: 0.4507
Epoch 10/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4612 - mae: 0.4920 - val_loss: 0.4299 - val_mae: 0.4550
Epoch 11/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4520 - mae: 0.4892 - val_loss: 0.4201 - val_mae: 0.4575
Epoch 12/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4459 - mae: 0.4849 - val_loss: 0.4291 - val_mae: 0.4433
Epoch 13/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4360 - mae: 0.4782 - val_loss: 0.4517 - val_mae: 0.4651
Epoch 14/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4376 - mae: 0.4800 - val_loss: 0.4265 - val_mae: 0.4489
Epoch 15/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4312 - mae: 0.4766 - val_loss: 0.4357 - val_mae: 0.4583
Epoch 16/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4278 - mae: 0.4750 - val_loss: 0.4344 - val_mae: 0.4541
Epoch 17/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4308 - mae: 0.4754 - val_loss: 0.4078 - val_mae: 0.4389
Epoch 18/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4248 - mae: 0.4705 - val_loss: 0.4394 - val_mae: 0.4484
Epoch 19/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4249 - mae: 0.4726 - val_loss: 0.4253 - val_mae: 0.4468
Epoch 20/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4211 - mae: 0.4688 - val_loss: 0.4530 - val_mae: 0.4563
Epoch 21/30
413/413 [==============================] - 0s 993us/step - loss: 0.4091 - mae: 0.4621 - val_loss: 0.4362 - val_mae: 0.4531
Epoch 22/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4112 - mae: 0.4639 - val_loss: 0.4845 - val_mae: 0.4821
Epoch 23/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4141 - mae: 0.4655 - val_loss: 0.4220 - val_mae: 0.4501
Epoch 24/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4112 - mae: 0.4623 - val_loss: 0.4459 - val_mae: 0.4638
Epoch 25/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4131 - mae: 0.4657 - val_loss: 0.4361 - val_mae: 0.4493
Epoch 26/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4052 - mae: 0.4608 - val_loss: 0.4237 - val_mae: 0.4460
Epoch 27/30
413/413 [==============================] - 0s 1ms/step - loss: 0.4145 - mae: 0.4672 - val_loss: 0.4086 - val_mae: 0.4414
129/129 [==============================] - 0s 523us/step

=== Activation Function Experiment Results ===
  activation      rmse        r2       mae  epochs_trained
0       relu  0.614292  0.712033  0.447390              30
1    sigmoid  0.571401  0.750842  0.395727              30
2       tanh  0.563709  0.757505  0.386141              30
3        elu  0.629712  0.697395  0.438780              27
4     Custom  0.646731  0.680817  0.445209              14
5       selu  0.624856  0.702043  0.432048              27

129/129 [==============================] - 0s 477us/step
129/129 [==============================] - 0s 508us/step

Best activation function by RMSE: tanh (RMSE = 0.5637)
Worst activation function by RMSE: Custom (RMSE = 0.6467)
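
The results above come from retraining the same network once per activation function. A minimal sketch of such a loop, assuming a small two-hidden-layer regression MLP on the 8 standardized California Housing features; the patience value and the "Custom" function (a swish-like stand-in here) are assumptions, since the notebook's actual definitions are not shown in this section:

```python
import tensorflow as tf
from tensorflow import keras

def build_model(activation, input_dim=8):
    """Small regression MLP; `activation` may be a Keras string or a callable."""
    return keras.Sequential([
        keras.layers.Dense(64, activation=activation, input_shape=(input_dim,)),
        keras.layers.Dense(32, activation=activation),
        keras.layers.Dense(1),  # linear output for regression
    ])

def custom_activation(x):
    # Placeholder for the notebook's "Custom" entry (swish-like, illustrative only)
    return x * tf.nn.sigmoid(x)

models = {}
for name, act in [("relu", "relu"), ("sigmoid", "sigmoid"), ("tanh", "tanh"),
                  ("elu", "elu"), ("Custom", custom_activation), ("selu", "selu")]:
    model = build_model(act)
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    # history = model.fit(X_train, y_train, validation_split=0.2, epochs=30,
    #                     callbacks=[keras.callbacks.EarlyStopping(
    #                         patience=10, restore_best_weights=True)])
    models[name] = model  # evaluate RMSE/R2/MAE on the test set after training
```

The varying `epochs_trained` values in the table (27, 14, 27 versus 30) are consistent with an EarlyStopping callback halting training once validation loss stops improving.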

Activation Function Explanations

  1. ReLU (Rectified Linear Unit):
    • Formula: f(x) = max(0, x)
    • Characteristics: A simple function that outputs 0 for negative inputs and the input itself for positive inputs
    • Strengths: Computationally efficient, mitigates the vanishing gradient problem, speeds up convergence
    • Weaknesses: Can suffer from "dying ReLU", where a neuron stops learning once its input stays negative
  2. Sigmoid:
    • Formula: f(x) = 1 / (1 + e^(-x))
    • Characteristics: Maps inputs to the range 0 to 1
    • Strengths: Probabilistic output, differentiable
    • Weaknesses: Prone to vanishing gradients, slower to compute than ReLU
  3. Tanh (Hyperbolic Tangent):
    • Formula: f(x) = (e^x - e^(-x)) / (e^x + e^(-x))
    • Characteristics: Maps inputs to the range -1 to 1
    • Strengths: Zero-centered output, works well with zero-centered data
    • Weaknesses: Can still suffer from vanishing gradients for extreme values
  4. ELU (Exponential Linear Unit):
    • Formula: f(x) = x if x > 0, α(e^x - 1) if x ≤ 0
    • Characteristics: Like ReLU, but takes negative values for negative inputs
    • Strengths: Avoids dying neurons, reduces bias shift
    • Weaknesses: More expensive to compute than ReLU
  5. LeakyReLU:
    • Formula: f(x) = x if x > 0, αx if x ≤ 0 (where α is a small constant)
    • Characteristics: A variant of ReLU that allows a small gradient for negative inputs
    • Strengths: Avoids ReLU's dying-neuron problem
    • Weaknesses: The parameter α has to be chosen manually
  6. SELU (Scaled Exponential Linear Unit):
    • Formula: f(x) = λx if x > 0, λα(e^x - 1) if x ≤ 0
    • Characteristics: ELU with a scaling factor that makes it self-normalizing
    • Strengths: Self-normalizing (keeps the mean and variance of activations stable)
    • Weaknesses: Optimal performance requires a specific weight initialization (LeCun normal)
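
The formulas above can be written directly as NumPy functions, which makes the six activations easy to compare numerically. This is a sketch of the math only, not the Keras implementations; the SELU constants are the standard λ and α values from Klambauer et al., as used by tf.keras:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

# Standard SELU constants (lambda, alpha) that make the function self-normalizing
SELU_LAMBDA, SELU_ALPHA = 1.0507009873554805, 1.6732632423543772

def selu(x):
    return SELU_LAMBDA * np.where(x > 0, x, SELU_ALPHA * (np.exp(x) - 1.0))

# Compare all six on a few sample inputs
x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for fn in (relu, sigmoid, tanh, elu, leaky_relu, selu):
    print(f"{fn.__name__:>10}: {np.round(fn(x), 4)}")
```

Evaluating the functions like this on a grid of inputs is also a quick way to plot them side by side and see the differences in their negative regions.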