{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "e5e0f994" }, "source": [ "# 🚀 Baseline XGBoost for Resource Estimation of CNNs (Keras Applications)\n", "This notebook demonstrates how to use XGBoost for predicting resource usage (like fit time) of CNN models based on dataset features." ] }, { "cell_type": "markdown", "metadata": { "id": "275c013b" }, "source": [ "## 1️⃣ Setup and Installation\n", "Ensure required libraries are installed." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "DPbLUZKvRtwx", "outputId": "d65bcfd7-a615-4b74-feb6-757456f42581" }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Found existing installation: scikit-learn 1.6.1\n", "Uninstalling scikit-learn-1.6.1:\n", " Successfully uninstalled scikit-learn-1.6.1\n", "Collecting scikit-learn==1.5.2\n", " Downloading scikit_learn-1.5.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (13 kB)\n", "Requirement already satisfied: numpy>=1.19.5 in /usr/local/lib/python3.11/dist-packages (from scikit-learn==1.5.2) (2.0.2)\n", "Requirement already satisfied: scipy>=1.6.0 in /usr/local/lib/python3.11/dist-packages (from scikit-learn==1.5.2) (1.14.1)\n", "Requirement already satisfied: joblib>=1.2.0 in /usr/local/lib/python3.11/dist-packages (from scikit-learn==1.5.2) (1.4.2)\n", "Requirement already satisfied: threadpoolctl>=3.1.0 in /usr/local/lib/python3.11/dist-packages (from scikit-learn==1.5.2) (3.6.0)\n", "Downloading scikit_learn-1.5.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (13.3 MB)\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m13.3/13.3 MB\u001b[0m \u001b[31m34.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25hInstalling collected packages: scikit-learn\n", "Successfully installed scikit-learn-1.5.2\n" ] } ], "source": [ "!pip uninstall -y scikit-learn\n", "!pip install scikit-learn==1.5.2" ] }, { "cell_type": "markdown", "metadata": { "id": "48b0b5f0" }, "source": [ "## 2️⃣ Import Libraries\n", "Import all necessary Python libraries for data handling, modeling, and visualization." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "id": "V23vhp8o9YHM" }, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np\n", "from sklearn.model_selection import train_test_split\n", "from sklearn.metrics import mean_squared_error\n", "from xgboost import XGBRegressor\n", "import joblib" ] }, { "cell_type": "markdown", "metadata": { "id": "107733d4" }, "source": [ "## 3️⃣ Data Loading & Preprocessing\n", "Load the dataset and perform basic preprocessing to prepare for modeling." 
] }, { "cell_type": "code", "execution_count": 3, "metadata": { "id": "UoYmjX7NGVVD" }, "outputs": [], "source": [ "def calculate_mspe_rmspe(y_true, y_pred):\n", " mape = np.mean(np.abs((y_true - y_pred) / (y_true)), axis=0) * 100\n", " mspe = np.mean(((y_true - y_pred) / y_true) ** 2, axis=0) * 100 # MSPE for each column\n", " rmspe = np.sqrt(mspe) # RMSPE for each column\n", " return mape, mspe, rmspe\n", "\n" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "CmmE7SNz-KXJ", "outputId": "dc55b8bf-2000-4954-b231-664d715851de" }, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "Index(['name', 'samples', 'input_dim_w', 'input_dim_h', 'input_dim_c',\n", " 'output_dim', 'optimizer', 'epochs', 'batch', 'learn_rate',\n", " 'tf_version', 'cuda_version', 'batch_time', 'epoch_time', 'fit_time',\n", " 'npz_path', 'gpu_make', 'gpu_name', 'gpu_arch', 'gpu_cc',\n", " 'gpu_core_count', 'gpu_sm_count', 'gpu_memory_size', 'gpu_memory_type',\n", " 'gpu_memory_bw', 'gpu_tensor_core_count', 'max_memory_util',\n", " 'avg_memory_util', 'max_gpu_util', 'avg_gpu_util', 'max_gpu_temp',\n", " 'avg_gpu_temp'],\n", " dtype='object')" ] }, "metadata": {}, "execution_count": 7 } ], "source": [ "# Load data\n", "# Assuming the data is in a CSV file with the target column 'fit_time_in_TF'\n", "data_path = 'dataset-new.csv' # Replace with the actual path to your dataset\n", "df = pd.read_csv(data_path)\n", "# Another way is to directly load the data from the hugging face where the dataset is hosted\n", "# url = 'https://huggingface.co/datasets/ICICLE-AI/ResourceEstimation_HLOGenCNN/resolve/main/dataset-new.csv'\n", "# df = pd.read_csv(url)\n", "df.columns" ] }, { "cell_type": "markdown", "metadata": { "id": "962d5030" }, "source": [ "## 4️⃣ Feature Engineering\n", "Extract relevant features and clean the dataset." ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "OGzq5lrIHh2R", "outputId": "9ee9eabd-7363-451a-b379-013b1aa7688d" }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ " name unit_name\n", "0 MobileNet_architecture_optadam_s1_ipd224x224x3... MobileNet\n", "1 MobileNet_architecture_optadam_s1_ipd224x224x3... MobileNet\n", "2 MobileNet_architecture_optadam_s1_ipd224x224x3... MobileNet\n", "3 MobileNet_architecture_optadam_s1_ipd224x224x3... MobileNet\n", "4 MobileNet_architecture_optadam_s1_ipd224x224x3... 
MobileNet\n" ] } ], "source": [ "# Extract substring before the first underscore\n", "df['unit_name'] = df['name'].str.split('_').str[0]\n", "\n", "# Display the updated DataFrame\n", "print(df[['name', 'unit_name']].head())" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "mw1fY7vM-fCw", "outputId": "a8903a63-83f3-4b6f-b061-4a4bb900cd1d" }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "\n", "Label Mapping: {'DenseNet121': 0, 'DenseNet169': 1, 'DenseNet201': 2, 'EfficientNetB0': 3, 'EfficientNetB1': 4, 'EfficientNetB7': 5, 'InceptionV3': 6, 'MobileNet': 7, 'MobileNetV2': 8, 'NASNetLarge': 9, 'NASNetMobile': 10, 'ResNet101': 11, 'ResNet152': 12, 'ResNet50': 13, 'VGG16': 14, 'VGG19': 15, 'Xception': 16}\n" ] } ], "source": [ "df = df.dropna() # Dropping rows with missing values (you can customize this)\n", "\n", "from sklearn.preprocessing import LabelEncoder\n", "label_encoder = LabelEncoder()\n", "# Transform the categorical column\n", "df['unit_name_encoded'] = label_encoder.fit_transform(df['unit_name'])\n", "# Optional: Mapping of encoded labels to original categories\n", "mapping = dict(zip(label_encoder.classes_, range(len(label_encoder.classes_))))\n", "print(\"\\nLabel Mapping:\", mapping)\n", "\n", "df = df.drop(columns=['name', 'npz_path', 'unit_name'])\n", "# Convert categorical features to numeric (if any)\n", "df = pd.get_dummies(df, drop_first=True)\n" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 290 }, "id": "04XJeqln-g4n", "outputId": "f63e3507-6770-41be-dc9c-b03a3cb232a6" }, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ " samples input_dim_w input_dim_h input_dim_c output_dim epochs batch \\\n", "0 1 224 224 3 10 1 1 \n", "1 1 224 224 3 10 1 1 \n", "2 1 224 224 3 10 1 1 \n", "3 1 224 224 3 10 2 1 \n", "4 1 224 224 3 10 2 1 \n", "\n", " learn_rate cuda_version batch_time ... max_gpu_util avg_gpu_util \\\n", "0 0.0100 12.2 22.07 ... 13.0 0.51 \n", "1 0.0010 12.2 18.44 ... 100.0 2.92 \n", "2 0.0001 12.2 18.78 ... 26.0 0.86 \n", "3 0.0100 12.2 9.38 ... 28.0 1.78 \n", "4 0.0010 12.2 9.30 ... 100.0 3.41 \n", "\n", " max_gpu_temp avg_gpu_temp unit_name_encoded optimizer_sgd \\\n", "0 25.0 25.00 7 False \n", "1 26.0 25.84 7 False \n", "2 26.0 26.00 7 False \n", "3 27.0 26.04 7 False \n", "4 27.0 26.55 7 False \n", "\n", " gpu_name_Tesla P100-PCIE-16GB gpu_name_Tesla V100S-PCIE-32GB \\\n", "0 True False \n", "1 True False \n", "2 True False \n", "3 True False \n", "4 True False \n", "\n", " gpu_arch_Tesla gpu_memory_type_hbm2e \n", "0 True False \n", "1 True False \n", "2 True False \n", "3 True False \n", "4 True False \n", "\n", "[5 rows x 30 columns]" ], "text/html": [ "\n", "
\n", "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
samplesinput_dim_winput_dim_hinput_dim_coutput_dimepochsbatchlearn_ratecuda_versionbatch_time...max_gpu_utilavg_gpu_utilmax_gpu_tempavg_gpu_tempunit_name_encodedoptimizer_sgdgpu_name_Tesla P100-PCIE-16GBgpu_name_Tesla V100S-PCIE-32GBgpu_arch_Teslagpu_memory_type_hbm2e
01224224310110.010012.222.07...13.00.5125.025.007FalseTrueFalseTrueFalse
11224224310110.001012.218.44...100.02.9226.025.847FalseTrueFalseTrueFalse
21224224310110.000112.218.78...26.00.8626.026.007FalseTrueFalseTrueFalse
31224224310210.010012.29.38...28.01.7827.026.047FalseTrueFalseTrueFalse
41224224310210.001012.29.30...100.03.4127.026.557FalseTrueFalseTrueFalse
\n", "

5 rows × 30 columns

\n", "
\n", "
\n", "\n", "
\n", " \n", "\n", " \n", "\n", " \n", "
\n", "\n", "\n", "
\n", " \n", "\n", "\n", "\n", " \n", "
\n", "\n", "
\n", "
\n" ], "application/vnd.google.colaboratory.intrinsic+json": { "type": "dataframe", "variable_name": "df" } }, "metadata": {}, "execution_count": 10 } ], "source": [ "df.head()" ] }, { "cell_type": "markdown", "metadata": { "id": "83f8b988" }, "source": [ "## 5️⃣ Train-Test Split\n", "Split the dataset into training and testing sets." ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "id": "m8xtdVgq_ZBt" }, "outputs": [], "source": [ "# Example: Split based on a numerical condition\n", "train_data = df[df['unit_name_encoded'] >= 6]\n", "test_data = df[df['unit_name_encoded'] < 6]\n", "\n", "train_data = train_data.sample(frac=0.2, random_state=42)\n", "\n", "# Separate features and target\n", "\n", "X_train = train_data.drop(columns=['max_memory_util',\t'avg_memory_util',\t'max_gpu_util',\t'avg_gpu_util',\t'max_gpu_temp',\t'avg_gpu_temp', 'epoch_time',\t'fit_time'])\n", "y_train = train_data[['epoch_time', 'fit_time', 'max_memory_util', 'max_gpu_util']]\n", "X_test = test_data.drop(columns=['max_memory_util',\t'avg_memory_util',\t'max_gpu_util',\t'avg_gpu_util',\t'max_gpu_temp',\t'avg_gpu_temp', 'epoch_time',\t'fit_time']) # Replace 'fit_time_in_TF' with your target column\n", "y_test = test_data[['epoch_time', 'fit_time', 'max_memory_util', 'max_gpu_util']]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "0B124aBf-i7l", "outputId": "5de51f6b-fbc7-4bbe-8849-89b9667f3218" }, "outputs": [ { "data": { "text/plain": [ "(1553, 27)" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train_data.shape" ] }, { "cell_type": "markdown", "metadata": { "id": "3250bbd6" }, "source": [ "## 6️⃣ Model Building with XGBoost\n", "Define, train, and predict using the XGBoost Regressor." ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 253 }, "id": "yc6bxIBq_ZuP", "outputId": "83fb3bd1-d5fa-48aa-dc23-5248667f2974" }, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "XGBRegressor(base_score=None, booster=None, callbacks=None,\n", " colsample_bylevel=None, colsample_bynode=None,\n", " colsample_bytree=None, device=None, early_stopping_rounds=None,\n", " enable_categorical=False, eval_metric=None, feature_types=None,\n", " gamma=None, grow_policy=None, importance_type=None,\n", " interaction_constraints=None, learning_rate=0.1, max_bin=None,\n", " max_cat_threshold=None, max_cat_to_onehot=None,\n", " max_delta_step=None, max_depth=6, max_leaves=None,\n", " min_child_weight=None, missing=nan, monotone_constraints=None,\n", " multi_strategy=None, n_estimators=100, n_jobs=None,\n", " num_parallel_tree=None, random_state=42, ...)" ], "text/html": [ "
" ] }, "metadata": {}, "execution_count": 12 } ], "source": [ "xgb_model = XGBRegressor(\n", " n_estimators=100,\n", " learning_rate=0.1,\n", " max_depth=6,\n", " random_state=42,\n", " verbosity=1\n", ")\n", "\n", "# Train the model\n", "xgb_model.fit(X_train, y_train)" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "id": "4moppOvOFnZD" }, "outputs": [], "source": [ "# Predict on the test set\n", "y_pred = xgb_model.predict(X_test)" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "xIwFbkEyNuby", "outputId": "2f9fb71f-b828-41b6-e9bb-2a5747a18444" }, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "array([[21.514431 , 20.075377 , 5.7195935, 48.045883 ],\n", " [19.894218 , 19.883572 , 10.0262985, 69.34461 ],\n", " [20.112694 , 20.007633 , 11.493458 , 73.78841 ],\n", " [12.975633 , 18.08273 , 7.1654763, 45.41951 ],\n", " [12.418544 , 18.560135 , 8.162864 , 59.944546 ],\n", " [12.637019 , 18.684196 , 8.962711 , 67.1194 ],\n", " [ 6.4134297, 20.612186 , 10.105451 , 60.0018 ],\n", " [ 5.8482485, 21.089592 , 10.060244 , 67.4093 ],\n", " [ 5.8482485, 20.968369 , 10.259554 , 74.447716 ],\n", " [ 4.295031 , 22.474957 , 10.036483 , 67.872734 ]], dtype=float32)" ] }, "metadata": {}, "execution_count": 15 } ], "source": [ "y_pred[:10]" ] }, { "cell_type": "markdown", "metadata": { "id": "b996833d" }, "source": [ "## 7️⃣ Evaluation Metrics\n", "Calculate MAPE, MSPE, RMSPE, and standard regression metrics." ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "gL5b_8Fb_lyj", "outputId": "05c40163-4803-4788-cc3d-34d0e9438dea" }, "outputs": [ { "output_type": "stream", "name": "stderr", "text": [ ":2: SettingWithCopyWarning: \n", "A value is trying to be set on a copy of a slice from a DataFrame.\n", "Try using .loc[row_indexer,col_indexer] = value instead\n", "\n", "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n", " df1['fit_time_res'] = (df1['batch_time'] / df1['batch']) * df1['samples'] * df1['epochs']\n" ] } ], "source": [ "def manualCalculate(df1):\n", " df1['fit_time_res'] = (df1['batch_time'] / df1['batch']) * df1['samples'] * df1['epochs']\n", " return df1[['fit_time_res']]\n", "\n", "y_manual = manualCalculate(X_test[['batch_time', 'batch', 'samples', 'epochs']])" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "zHc7GLmw_eDu", "outputId": "7240f37a-3dc2-4ae9-a3ac-08b977714fbd" }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Target Column 1:\n", " MSE: 1245.5831\n", " RMSE: 35.2928\n", " MAPE: 35.2928%\n", " MSPE: 42.7299%\n", " RMSPE: 6.5368%\n", "Target Column 2:\n", " MSE: 4263.1022\n", " RMSE: 65.2924\n", " MAPE: 65.2924%\n", " MSPE: 43.7782%\n", " RMSPE: 6.6165%\n", "Target Column 3:\n", " MSE: 99.2517\n", " RMSE: 9.9625\n", " MAPE: 9.9625%\n", " MSPE: inf%\n", " RMSPE: inf%\n", "Target Column 4:\n", " MSE: 456.3985\n", " RMSE: 21.3635\n", " MAPE: 21.3635%\n", " MSPE: inf%\n", " RMSPE: inf%\n" ] }, { "output_type": "execute_result", "data": { "text/plain": [ "['xgb_model_model.pkl']" ] }, "metadata": {}, "execution_count": 17 } ], "source": [ "\n", "\n", "mape__per_column, mspe_per_column, rmspe_per_column = calculate_mspe_rmspe(y_test, y_pred)\n", "\n", "mse_per_column = mean_squared_error(y_test, y_pred, 
, { "cell_type": "markdown", "metadata": { "id": "b996833d" }, "source": [ "## 7️⃣ Evaluation Metrics\n", "Calculate MAPE, MSPE, RMSPE, and standard regression metrics. The model is also compared against a simple analytic baseline that estimates fit time as `(batch_time / batch) * samples * epochs`, i.e., time per batch times batches per epoch times epochs." ] },
{ "cell_type": "code", "execution_count": 16, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "gL5b_8Fb_lyj", "outputId": "05c40163-4803-4788-cc3d-34d0e9438dea" }, "outputs": [], "source": [ "def manualCalculate(df1):\n", "    # Work on a copy to avoid pandas' SettingWithCopyWarning\n", "    df1 = df1.copy()\n", "    # fit_time ~= time per batch * batches per epoch * epochs\n", "    df1['fit_time_res'] = (df1['batch_time'] / df1['batch']) * df1['samples'] * df1['epochs']\n", "    return df1[['fit_time_res']]\n", "\n", "y_manual = manualCalculate(X_test[['batch_time', 'batch', 'samples', 'epochs']])" ] },
{ "cell_type": "code", "execution_count": 17, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "zHc7GLmw_eDu", "outputId": "7240f37a-3dc2-4ae9-a3ac-08b977714fbd" }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Target Column 1:\n", "  MSE: 1245.5831\n", "  RMSE: 35.2928\n", "  MAPE: 35.2928%\n", "  MSPE: 42.7299%\n", "  RMSPE: 6.5368%\n", "Target Column 2:\n", "  MSE: 4263.1022\n", "  RMSE: 65.2924\n", "  MAPE: 65.2924%\n", "  MSPE: 43.7782%\n", "  RMSPE: 6.6165%\n", "Target Column 3:\n", "  MSE: 99.2517\n", "  RMSE: 9.9625\n", "  MAPE: 9.9625%\n", "  MSPE: inf%\n", "  RMSPE: inf%\n", "Target Column 4:\n", "  MSE: 456.3985\n", "  RMSE: 21.3635\n", "  MAPE: 21.3635%\n", "  MSPE: inf%\n", "  RMSPE: inf%\n" ] }, { "output_type": "execute_result", "data": { "text/plain": [ "['xgb_model_model.pkl']" ] }, "metadata": {}, "execution_count": 17 } ], "source": [ "mape_per_column, mspe_per_column, rmspe_per_column = calculate_mspe_rmspe(y_test, y_pred)\n", "\n", "mse_per_column = mean_squared_error(y_test, y_pred, multioutput='raw_values')  # MSE for each column\n", "rmse_per_column = np.sqrt(mse_per_column)  # RMSE for each column\n", "\n", "# Display per-target results\n", "for i, (mse, rmse, mape, mspe, rmspe) in enumerate(zip(mse_per_column, rmse_per_column, mape_per_column, mspe_per_column, rmspe_per_column)):\n", "    print(f\"Target Column {i + 1}:\")\n", "    print(f\"  MSE: {mse:.4f}\")\n", "    print(f\"  RMSE: {rmse:.4f}\")\n", "    print(f\"  MAPE: {mape:.4f}%\")\n", "    print(f\"  MSPE: {mspe:.4f}%\")\n", "    print(f\"  RMSPE: {rmspe:.4f}%\")\n", "\n", "# Save the model for future use\n", "joblib.dump(xgb_model, 'xgb_model_model.pkl')\n", "\n", "# Example of loading the model\n", "# loaded_model = joblib.load('xgb_model_model.pkl')" ] },
{ "cell_type": "code", "execution_count": 18, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "VR0GKYPTBQqu", "outputId": "6ae3a663-c9d8-4c9d-a4ea-7db03208de35" }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Target Column 1:\n", "  MSE: 111.3580\n", "  RMSE: 10.5526\n", "  MAPE: 10.5526%\n" ] }, { "output_type": "execute_result", "data": { "text/plain": [ "['xgb_model_model.pkl']" ] }, "metadata": {}, "execution_count": 18 } ], "source": [ "# Evaluate the analytic baseline against the measured fit time\n", "# (.values aligns the differently named columns positionally)\n", "mape_per_column, mspe_per_column, rmspe_per_column = calculate_mspe_rmspe(y_test[['fit_time']].values, y_manual.values)\n", "\n", "mse_per_column = mean_squared_error(y_test[['fit_time']], y_manual, multioutput='raw_values')  # MSE for each column\n", "rmse_per_column = np.sqrt(mse_per_column)  # RMSE for each column\n", "\n", "# Display results\n", "for i, (mse, rmse, mape, mspe, rmspe) in enumerate(zip(mse_per_column, rmse_per_column, mape_per_column, mspe_per_column, rmspe_per_column)):\n", "    print(f\"Target Column {i + 1}:\")\n", "    print(f\"  MSE: {mse:.4f}\")\n", "    print(f\"  RMSE: {rmse:.4f}\")\n", "    print(f\"  MAPE: {mape:.4f}%\")\n", "\n", "# Save the model for future use\n", "joblib.dump(xgb_model, 'xgb_model_model.pkl')\n", "\n", "# Example of loading the model\n", "# loaded_model = joblib.load('xgb_model_model.pkl')" ] },
{ "cell_type": "code", "execution_count": 19, "metadata": { "id": "c0a4ea0a" }, "outputs": [], "source": [ "from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score, root_mean_squared_error\n", "import matplotlib.pyplot as plt\n", "import seaborn as sns\n", "import numpy as np" ] },
{ "cell_type": "code", "execution_count": 22, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "4f37c35d", "outputId": "6be2112f-41c1-4b0e-f367-f9ab8dd4f839" }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "MAE: 24.9825\n", "RMSE: 32.9778\n", "R²: 0.2661\n" ] } ], "source": [ "# Standard regression metrics, averaged over all four targets\n", "mae = mean_absolute_error(y_test, y_pred)\n", "rmse = root_mean_squared_error(y_test, y_pred)  # replaces the deprecated squared=False\n", "r2 = r2_score(y_test, y_pred)\n", "\n", "print(f'MAE: {mae:.4f}')\n", "print(f'RMSE: {rmse:.4f}')\n", "print(f'R²: {r2:.4f}')" ] }
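, { "cell_type": "markdown", "metadata": {}, "source": [ "The conclusion also mentions visualizing where predictions deviate; `matplotlib` and `seaborn` are imported above but otherwise unused. The cell below is a minimal sketch of such a plot: predicted vs. actual `fit_time` (column index 1 of `y_pred`) on the held-out architectures." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative: predicted vs. actual fit_time on unseen architectures\n", "plt.figure(figsize=(6, 6))\n", "plt.scatter(y_test['fit_time'], y_pred[:, 1], alpha=0.3, s=10)\n", "lims = [min(y_test['fit_time'].min(), y_pred[:, 1].min()),\n", "        max(y_test['fit_time'].max(), y_pred[:, 1].max())]\n", "plt.plot(lims, lims, 'r--', label='perfect prediction')\n", "plt.xlabel('Actual fit_time')\n", "plt.ylabel('Predicted fit_time')\n", "plt.legend()\n", "plt.title('Predicted vs. actual fit time (unseen architectures)')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, the conclusion suggests hyperparameter tuning as future work. Below is a minimal, illustrative sketch of what that could look like: a tiny grid over `n_estimators` and `max_depth` (values chosen arbitrarily here), scored on a validation split carved out of the training data so the test set stays untouched." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative sketch: tiny grid search scored on a validation split\n", "X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=42)\n", "\n", "best = None\n", "for n_estimators in [100, 300]:\n", "    for max_depth in [4, 6, 8]:\n", "        model = XGBRegressor(n_estimators=n_estimators, max_depth=max_depth,\n", "                             learning_rate=0.1, random_state=42)\n", "        model.fit(X_tr, y_tr)\n", "        rmse_val = root_mean_squared_error(y_val, model.predict(X_val))\n", "        if best is None or rmse_val < best[0]:\n", "            best = (rmse_val, n_estimators, max_depth)\n", "print(f'Best validation RMSE {best[0]:.4f} with n_estimators={best[1]}, max_depth={best[2]}')" ] }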
] }, { "cell_type": "markdown", "metadata": { "id": "c78d5e48" }, "source": [ "### ✅ In Conclusion:\n", "- The XGBoost model provides a reasonable baseline for predicting CNN resource usage.\n", "- Visualization highlights areas where predictions deviate.\n", "- Feature importance gives insights into which factors most influence fit time.\n", "\n", "For future work, hyperparameter tuning and advanced models could improve accuracy." ] } ], "metadata": { "accelerator": "GPU", "colab": { "gpuType": "T4", "provenance": [] }, "kernelspec": { "display_name": "Python 3", "name": "python3" }, "language_info": { "name": "python" } }, "nbformat": 4, "nbformat_minor": 0 }